A. INTRODUCTION: HUMAN AND ARTIFICIAL MINDS
Computer science, cognitive psychology, and neuroscience have a history of shared questions and
inter-related advances. In his famous 1950 paper introducing the Turing Test, a paper published in
Mind – A Quarterly Review of Psychology and Philosophy, Alan Turing (1) explicitly formulated the
computer in terms of mimicking how humans do computations. Indeed, he used the word
"computer" because that was then the job title of humans who did numerical calculations by hand. In that
paper, Turing also gave us the current meaning of the word “programming,” writing: “If one wants to
make a machine mimic the behaviour of the human computer in some complex operation one has
to ask him how it is done, and then translate the answer into the form of an instruction table.
Constructing instruction tables is usually described as 'programming'."
Evolving knowledge of the brain and its operations has also informed advances in
computer science. Indeed, the history of neural networks – and the original use of the term – is
roughly coextensive with that of modern computing machines. An early mathematical model of a
single neuron was suggested by neuroscientists McCulloch and Pitts (2). John von Neumann and
Alan Turing both explored network models inspired by McCulloch and Pitts’ earlier work (3).
Marvin Minsky (4), a leading theorist of artificial intelligence in the 1970s, summed up the
intertwined goals of cognitive psychology, human neuroscience and artificial intelligence this way: “I
draw no boundary between a theory of human thinking and a scheme for making an intelligent
machine; no purpose would be served by separating these today since neither domain has theories
good enough to explain – or to produce – enough mental capacity.” Although we know much more
today than we did in 1974 about human cognition and its neural underpinnings, and although
computer science and its ability to build smart systems have advanced at a breakneck pace,
Minsky's conclusion is still apt, especially with respect to understanding and producing systems that
learn.
Learning – adaptive, intelligent change in response to experience – is a core property of
human cognition. The brain is a complex system that changes in response to the activations evoked
by sensory input; the brain also helps create the experiences from which we learn by directing the
actions of the body (5). Systems that learn have been the long-sought goal of artificial intelligence.
In the 1980s, a team of cognitive psychologists, computer scientists, and early computational
neuroscientists (Rumelhart, McClelland & The PDP Team, 6) extended McCulloch and Pitts’
network, figuring out how to automatically train multilayer networks of neuron-like
units. However, at the start of the 21st century, computer science, neuroscience, and cognitive
psychology seemed to be moving apart in their approaches to learning – still borrowing from each
other and with individual scientists sometimes jumping from one field to the other – but with big
knowledge advances occurring in discipline-specific directions (3).
THE TIPPING POINT
In a remarkably short period of time, this has changed. Advances in technology and computing
power have enabled machine learning, neuroscience, and the study of human learning to move
from “toy,” that is, small-scale, approaches, to the study of learning from raw sensory input and to
do so at the massive scale that constitutes daily life. There is general agreement (3, 7-10) that the
advances in all three fields place us at the tipping point for powerful and consequential new insights
into mechanisms of (and algorithms for) learning. These advances have the potential to
revolutionize teaching and education, to yield behavioral interventions that precisely target change
in specific neural processes, and (through powerful new machine learning approaches) to impact all
of science as well as everyday life and society as a whole.
There is also an emerging consensus that the big breakthroughs will come from reunifying the sciences of human learning, human neuroscience, and machine learning. 'Thought papers' are increasingly making explicit calls for researchers in machine learning to use human and
neural inspiration to build machines that learn like people (3, 7-10), and for researchers in human
cognition and neuroscience to leverage machine learning algorithms as hypotheses about cognitive
and neural mechanisms. Finally, the potential value of joining behavioral, neural, and machine
approaches to understanding learning is being realized – and generating much interest – by the
dramatic advances in machine learning arriving from research teams such as Google's DeepMind, with teams recruited from cognitive psychology, human and primate neuroscience, and computational neuroscience, as well as computer science.
These calls for joining efforts in a unified theory of learning have been picked up by funding
agencies. For example, the National Science Foundation recently called for proposals to build
Collaborative Networks on Learning. This funding initiative seeks to fund new research teams in
human learning, neuroscience, and machine learning. In the CFP, NSF wrote: “Learning is
complex, and many investigators from multiple disciplinary perspectives conduct research on this
topic. Advances in the integration and accumulation of this knowledge notwithstanding, key
knowledge remains fragmented across and within disciplines. Progress is hampered by the jingle of
studying different phenomena under the same name, and the jangle of studying the same
phenomena under different names. A deep and comprehensive understanding of learning requires
integration across multiple perspectives and levels of analysis…. from the frontiers of science and
engineering…to innovation [that addresses] societal needs through research and education.”
THE NEXT FRONTIER
Schank (11), a seminal figure in the early days of artificial intelligence, wrote (in the journal Cognitive
Psychology): "We hope to be able to build a program that can learn, as a child does… instead of
being spoon-fed the tremendous information necessary." At a top machine learning conference
(KDD) this summer, Professor Jitendra Malik, the Arthur J. Chick Professor of Electrical
Engineering and Computer Science at the University of California at Berkeley and a leading expert
in machine learning and computer vision, was asked by a student how to prepare for the next big
advances in machine learning. He was quite clear in his response: “Go study developmental
psychology seriously and then bring that knowledge in to build new and better algorithms.” He
recommended a paper by PI Smith as the place to start (12).
Using children as inspiration for machine learning makes good sense: by the benchmarks of
speed of learning, amount and diversity of knowledge, robustness, generalization and innovation,
human children (in their everyday worlds albeit perhaps not in schools) are the best learning
devices on the face of the earth. This proposal seeks a unified understanding of learning from a
human developmental perspective that is expressed in exploitable algorithms. To the best of our
knowledge, we will be the first formally constituted cross-disciplinary team to actively pursue through our
research programs (as opposed to merely writing thought papers about its value) a
computational understanding – grounded in brain systems – of learning from a human
developmental perspective.
INDIANA’S SPECIAL OPPORTUNITY
It is not true in general that engineering solutions closely follow biological solutions (consider
airplanes). It is even less true that engineers attempt to build solutions that follow biological
developmental pathways. And although engineering often provides new tools for measurements in
biology, it is not generally true that theories and algorithms developed in the context of engineering
problems have much to say about biological processes. What this emerging field of a science of
learning is trying to do – forming a new convergence of machine and human learning – is novel,
hard, and consequential. Indiana, with its recognized leadership in developmental science, has a
special role to play. Given financial investment at this tipping point, Indiana University can be at the
leading edge of this new field.
The team members are ready:
• We have been meeting for a year discussing the specific aims, our rationale and paths to
achieving them.
• We have been part of continuing collaborations relevant to this new field and have started new
ones directly relevant to this proposal.
• We have already talked to program officers at NSF and ONR and are being encouraged in our
efforts.
• We have already solved many of the cultural barriers that exist across disciplines (different
names for the same phenomena, different goals) that may stymie other groups.
• We operate within the interdisciplinary culture of the Cognitive Science Program, one of the top-rated programs of its kind in the world and one that has a well-established culture of fostering collaborative research that pushes through the bounds of narrow disciplinary perspectives.
• We already collaborate in interdisciplinary training: six members of the team are also faculty in the Training Program in Integrative Developmental Science, an NIH T32-funded program now in its 22nd year (with 5 predoctoral and 3 postdoctoral lines).
• As our individual curriculum vitae and external grant records show, we are a group of faculty
with outstanding records of significant contributions to science, already leaders or emerging
leaders in the contributing fields.
B. SPECIFIC AIMS
Our goal is a new unified field of science that encompasses human and machine learning that
incorporates behavioral, neuroscientific, and computational insights across all three fields. The idea
is NOT that human and machine learning are or should be the same. Rather, the idea is that there
are formally specifiable principles that characterize learning in general, and that an integrated
approach will get us to those principles, resulting in usable new insights both for human learning
and for machines. This means that the proposed research is fundamentally different from typical
modeling or computational approaches. To succeed, we cannot just develop new algorithms that
beat current state of the art approaches in machine learning, nor simply detail new circuits in the
brain, nor just show that some training regimen is particularly effective with children. Nor will it be
enough to show that we can build a model that learns in some domain like children do or that
mimics some aspect of neural processing. For this new unified field to reach its promise, we also
have to do the hard analytic work of understanding how and why these algorithms, circuits, and
regimens have the effects that they do. This is the major innovation of the formation of this new
research area.
RESEARCH AIMS
The proposed research focuses on visual learning – faces, objects, letters, numbers and algebraic
equations as illustrated in the figure below. These images in the figure are taken from the ongoing
individual and collaborative research of team members – head-camera images from 1 month old to
2 year old infants from the research of Yu, Smith and Crandall (25-27), children’s hand written
letters and numbers from the work of James, Landy, and Smith (28-30), and algebra equations from
the work of Landy and Goldstone (30).
[Figure: Different classes of visual objects across development]
Visual learning is central to all aspects of human intelligence and a core interest within the research
programs of the contributing faculty. Although there are still many unsolved problems, more than 50
years of neuroscience research provides a well-documented understanding of the basic structure of
the human visual system. Machine vision is a rapidly progressing field and arguably the most
advanced domain in machine learning and, in deep learning nets, the core idea approximates the
bottom-up cortical pathways that serve human visual object recognition. The current state of
knowledge provides the foundation on which to pursue potentially transformative questions of how
young children so readily learn what they do and how we might build non-biological machines that
can do as well.
The research aims derive from three ideas.
1. Reuse.
Modularity, reuse and hierarchy are central principles of the human visual system and of the multilayered (deep) networks used in machine vision (13). Reuse may be defined as the use of the same
module (e.g., a layer in a neural network) for solving very different problems (e.g., recognizing faces
and algebraic equations). However, another relevant sense of reuse is the “repurposing” of a
component for a task for which it was not originally intended. This is a compelling fact about human
learning. Specialized brain regions that served specific functions such as face recognition were
traditionally assumed to have evolved to fit those tasks. However, culture has changed more rapidly
than human brains have evolved. As a result, many modern tasks – reading, mathematics – are
performed by brains that evolved for other purposes (14). In brief, the brain solves new cognitive
tasks by reusing brain regions that could not have evolved for those purposes and does so by
forming new patterns of connectivity out of these components, and by adapting the internal
structure of those components through experience. Developmental and educational psychologists
do not talk about reuse. But they do talk about the “cascade” and “readiness” -- the far reach of past
learning into future learning. For example, early block play predicts later mathematics learning (15),
as does precision of visual discrimination of textured patterns (16). We seek, through the study of
reuse, to understand how past learning in one domain leaves hidden competencies and hidden
deficits that do not show their effect until later learning in some more difficult task. For example, by
hypothesis, infants' early learning about visual objects tunes and trains components essential to
learning about visual symbols. Under Specific Aim 1, we will pursue the computational and
learning consequences of reuse by (1) testing the hypothesized predictive relations between
perceptual skills across domains in young children, (2) conducting training experiments with
children that measure the behavioral and neural consequences of training in one domain (e.g.,
objects) on later learning in another (e.g., letter recognition); and (3) conducting parallel
experiments on deep-learning networks. Again, we seek not just to achieve learning by the
algorithms but to analyze the internal workings of machine learners to understand the principles of
reuse. The computational value of reuse for learning systems is that it enables learners over a
series of different tasks to become progressively better learners of anything. The downside is that if
early learning is not optimal, later learning may falter, a downside highly relevant to understanding
the potent forces of poverty (and unequal early learning environments) on school achievement.
2. Self-generated training sets.
Before formal schooling, in their natural everyday worlds, children are not fed training data by some
external force that selects what to show them and when. Instead, young children learn by doing.
And what they do directly determines what they see and the data set for learning. This might be
viewed as a “feel-good” notion about active engagement and children’s interests. But we are
looking for more than this; we want to algorithmically understand the principles of self-generated
training sets, and why and how they may produce robust inventive learning. One case we consider
in this proposal is how children learn basic object categories; their progress is markedly different
from state-of-the art machine vision. Young children learn their first 100 or so object categories
quite slowly – with many repetitions and examples; this part is much like current state of the art
machine vision (17). But after those first 100 object categories, they can learn a whole category from
experience with a single instance (17,18). For example, a single experience with a John Deere
tractor leads to adult-like use of the word “tractor” from that day forward. We believe this prowess
stems in part from the structure in training sets for visual object recognition that children present to
themselves. Current state-of-the-art machine vision uses training sets that are primarily
photographs of objects that are selected by the computer scientists doing the training. They
typically present as many different training instances of the object categories to be trained as
possible and then test generalization to new instances (new photographs). These training sets are
not at all like the training sets for everyday learning by children, as children actively select their own
visual experiences – by where they look, how long they look, by their body movements, by picking
objects up, by building towers of blocks or by scribbling. Under Specific Aim 2, we will conduct a
series of parallel human training and machine learning studies designed to reveal and exploit the
structure in self-generated data sets. The human learning studies also include pre- and post-training imaging to understand how the dynamics of functional patterns of connectivity across multimodal brain regions support robust learning and generalization.
We will specifically examine the role of information selection by the learner (and its timing
with respect to current state of the learning), the dynamic structure of self-generated visual
information, and the use of multimodal information. Through this aim, we will delve deeper into the
creation of neural systems and apply this knowledge to the construction of biologically motivated
machine learners. Doing this will require us to move beyond the current feed-forward deep
networks used in machine vision to new approaches that incorporate time and re-entrant signals.
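The selection principle in (1) and (3) can be made concrete with a toy sketch. The following is our minimal illustration, not one of the proposed models: a tiny logistic-regression learner that builds its own training set by always choosing the pool item it is currently most uncertain about, so that what it sees next depends on its current state. All data, parameters, and names here are illustrative.

```python
# Toy sketch (illustrative only) of a "self-generated training set":
# the learner selects its own next training example via uncertainty
# sampling, so the training sequence depends on its current state.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pool: two Gaussian clusters in 2-D, labeled 0 and 1.
n = 200
X = np.vstack([rng.normal(-1.0, 0.7, (n // 2, 2)),
               rng.normal(+1.0, 0.7, (n // 2, 2))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

w = np.zeros(2)
b = 0.0

def predict_proba(X):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def sgd_step(x, t, lr=0.5):
    global w, b
    p = predict_proba(x[None, :])[0]
    g = p - t                      # gradient of the log loss
    w -= lr * g * x
    b -= lr * g

unlabeled = list(range(n))
chosen = []
for _ in range(30):
    # The learner picks the pool item it is most uncertain about
    # (predicted probability closest to 0.5): its self-made curriculum.
    probs = predict_proba(X[unlabeled])
    i = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]
    unlabeled.remove(i)
    chosen.append(i)
    sgd_step(X[i], y[i])

accuracy = np.mean((predict_proba(X) > 0.5) == y)
```

Even this stripped-down version captures the property under study: the training sequence is a product of the learner's own history, not an externally fixed, randomly ordered set of photographs.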
3. Percepts and concepts.
Thinking and reasoning are not the same as perceptual discrimination and categorization. Seeing
that the algebraic equations 5x+y and 5(x+y) are different does not mean that one understands the
meaning difference. However, significant evidence suggests that advanced mathematicians rely
heavily on trained perception and action routines, and moreover write equations in ways that
spatially highlight meaning (19,20). This suggests a potentially direct link between perceptual
learning and higher-level concepts. The nature of this link between perception and cognition is one
of profound interest both in human learning (behavioral and neural) and in machine learning
(21,22). In this project we will move into the territory of symbolic deep networks and models that
combine deep learning and variational inference (23,24). We will attempt to understand the relation
between neural networks and logical representations, bridging the gap between a system that can
learn to recognize and a system that can learn the function that generates the regularities in the
data set, and in so doing the power to invent and to reason. Under Specific Aim 3, we will
concentrate on two problems: (1) how children go from heuristics about how number names map to
multi-digit numbers to understanding the principles of base 10 notation; and (2) how the perceptual
and conceptual properties of algebraic equations interact to support high level mathematics
reasoning.
EXCELLENCE AIMS
Our goal is to build a community that collaborates across cognitive psychology, human
neuroscience, and artificial intelligence and that makes sustained high impact contributions beyond
the life of this grant and the specific projects proposed here. To build this new exciting and
consequential field, we cannot simply collaborate but need to become a community that is skilled in
thinking and integrating the fields and comfortable enough to challenge, to admit when we do not
know, and to be brave in designing new kinds of experiments that span people and machines.
However, no single scientist can be expert across all kinds of analysis and time scales relevant to
this project. Accordingly, developmental scientists must acquire both the conceptual and practical
skills that enable them to work in science teams in which members may bring expertise from
different disciplines with different cultures. Accordingly, in addition to the research itself, we have
three excellence aims:
(1) To become a community with a shared vision of integrative and high-integrity science. To
this end, we attack the research aims as a true collaborative team, in integrated projects. That is,
the proposed research is not a stapled-together set of ongoing projects but a new (and daring) effort.
(2) To increase the size of the community at Indiana. To achieve a critical mass and world-class
impact, we need to hire additional individuals on the analytic-theory side of machine learning and its
links with human cognition and with human neuroscience. There are outstanding individuals
currently in PhD programs and post-doctoral positions. We also need to recruit and to be open to
other faculty already here at Indiana.
(3) To train the next generation in these fields and, in doing so, make a name for Indiana as a place
that is truly building a transformative new field of learning.
C. RESEARCH DESIGN AND METHODS
Aim 1 - REUSE
Collaborative Team: Behavioral studies: Yu, Smith, Landy, Goldstone, James; Neuroimaging
studies and analytics: James, Pestilli, White, new hire; Machine Learning: Crandall, Yu, new hires;
Theoretical Analyses: Jones, Goldstone, Sporns, Natarajan, new hire.
Rationale. The figure (adapted from 31) is a cartoon of the
hierarchy of cell types that comprise the human visual
cortex. Lower cells selectively respond to simple lines at
specific retinal locations. Higher levels respond to distinct
categories – a tree, a face, the letter H. The feed-forward
activation means that the response patterns of the higher
layers compute over the patterns of lower layers.
Recognizing a tree or a face or an H all depend on the
precision, tuning, and activation patterns of lower layers.
This is how past learning may create hidden competencies
or deficits that determine later learning. The second figure
(from 32) illustrates how disruptions at lower layers disrupt
higher layers. Disruptions in the response patterns in lower
layers could have minor or severe consequences, and, since
the system is nonlinear, these may be surprising. Further,
the consequence may not be observable in the ‘next level’
after the disruption, but may only be observable far
downstream. Another consequence of early-layer deficits or
competencies is lateral effects: adjacent cell assemblies
can affect one another at the same level.
State-of-the-art machine vision algorithms and deep learning networks have a hierarchical
structure similar to that of the layers of the human visual cortex. Indeed, they are called "deep" precisely because
of their relatively many layers of hierarchically organized neuron-like units (3, 7,8). Thus, these
deep-learning networks provide us with a way of studying and understanding the principles of reuse
and hidden competencies and deficits.
But how can we find – in human learners or in machine learners – hidden competencies and
deficits? What are the sign posts that some later learning is reusing – and being constrained by –
some earlier learned component? We will use two routes. The first is behavioral. What does it
mean to be able to recognize a dog or the letter H? In many human and machine recognition
studies, the standard is recognizing high quality instances under normal viewing conditions. But we
are looking for potentially subtle indications that processes involved in recognition of an early-learned class of objects may have strengths or weaknesses that have downstream effects on later
learning. Accordingly, we will challenge recognition systems, measuring recognition in both optimal
and in very challenging suboptimal conditions. The second route is by looking inside the learner –
using neuro-imaging measures with humans and analyzing the activation patterns within layers for
the machine learners.
The specific research question motivating this project is how past learning, even long ago,
influences learning in a different domain. We focus on 4- to 6-year-old children learning about
letters, words, numbers, and multi-digit numbers. For children who are having difficulty in this new
learning, can we find traces of hidden deficits in simple visual discriminations, in face discrimination,
or in the recognition of common objects (two domains in which the dense periods of visual
experience are, respectively, early infancy and toddlerhood)? Can we find evidence of these
differences that mark the role of reuse through neuroimaging? Can we build these same patterns
of stronger and weaker learners in machine learners and identify their hidden competencies and
deficits? And if we can, can we analytically determine the weak layer(s) of learning and is there a
way – after the fact – to fix it? For example, one recent theoretical analysis (33) of feed-forward
deep learning nets suggests that the weakest layer determines performance but also learns most
rapidly. The fact that the weakest layer learns faster than other layers could be advantageous. But
because human children use the same visual system to solve many different problems, later
learning that finds an optimal solution to one task (e.g., letters) might not find the optimal solution to
the many other tasks (e.g., multi-digit numbers) that need to be also learned. Our approach is
absolutely novel with deep relevance for understanding reuse, for understanding why preschool
learning matters, for developing effective education and remediation programs, and for
understanding how machine learners may optimize learning in multiple overlapping tasks.
Child study details. We will conduct two studies with 4- to 6-year-old children, one measuring
competencies across domains (simple visual stimuli, faces, objects, numbers, letters); the other is a
training study. Both have neuroimaging components. In total, 200 children (representative samples
including 20% of children with family incomes that meet the standard for free-lunch at school) will
participate in the behavioral-only studies, and 100 children in total will participate in the neuroimaging
component in addition to the behavioral component. In the Measuring Competencies study,
children will be presented with tasks measuring their ability to detect, discriminate, and recognize
simple stimuli (lines), faces, common objects, letters, and numbers under optimal and suboptimal
viewing conditions. We will use a well-understood search task paradigm with which we have
experience (54) in which children are presented with visual arrays in which one object is different
from the rest and their task is to find that different object. We use eye gaze (via eye-tracking
systems) as the principal dependent variable. The arrays will be manipulated to challenge the visual
system including degree of clutter, contrast, and blur. The children in the smaller sample study will
also provide evidence of cortical processing of the same stimuli under these demands and
conditions. As we manipulate the categories and visual challenge, we will observe activation
patterns in different brain regions enabling us to observe how neural systems respond to these
challenges and to locate regions (layers) implicated in the observed individual differences in
children's responding. In the Training Study, we will train children (in an 8-week training procedure
following the methods of James (28), including pre- and post-test imaging) to improve line, face,
and object detection under suboptimal conditions. Will this improve performance with letters and
numbers, visual forms about which children this age are just beginning to learn?
Machine learning studies. We will mimic child study 1 by training feed-forward networks in face
and object recognition and then clamp learning in early layers (after different amounts of training) to
understand how differential learning in these areas limits or supports more rapid learning of letters
and numbers. We will purposely disrupt processing in individual layers to determine the outcome. In
an innovation with respect to measures of machine learning, we will test the networks under
suboptimal as well as optimal visual conditions and link performance to that of children. We will
analyze the trajectories of learning and how layer-specific learning changes as a function of
weaknesses and strengths in prior learning (33, 34). We will mimic child study 2 by then giving
these networks extensive training in suboptimal conditions (an unprecedented training procedure in
this field) and then retesting their performance and re-analyzing – layer by layer – their learning.
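The clamping manipulation can be sketched in miniature. The following is our illustration with a toy two-layer network, not the full deep networks the project would use: an "early" layer is frozen and only the later layer is trained on a new task. The stand-in "letter" task and all sizes and rates are hypothetical.

```python
# Toy sketch (illustrative only) of "clamping" an early layer:
# the first layer's weights are frozen, and only the later layer
# is updated when learning a new task.
import numpy as np

rng = np.random.default_rng(1)

# Tiny 2-layer net: 4 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(0, 0.5, (4, 8))    # "early" layer (stands in for prior object learning)
W2 = rng.normal(0, 0.5, (8, 1))    # "late" layer, to be retrained on the new task

def forward(X):
    h = np.tanh(X @ W1)            # clamped early-layer representation
    return h, 1 / (1 + np.exp(-h @ W2))

# Hypothetical new task (a stand-in for letter recognition):
# a simple linearly separable rule on the inputs.
X = rng.normal(0, 1, (64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)[:, None]

W1_frozen = W1.copy()
W2_init = W2.copy()
for _ in range(200):
    h, p = forward(X)
    grad_out = (p - y) / len(X)    # log-loss gradient at the output
    W2 -= 0.5 * (h.T @ grad_out)   # update ONLY W2; W1 receives no update (clamped)

_, p = forward(X)
acc_after = float(np.mean((p > 0.5) == y))
```

The analytic step the aim calls for then asks how performance on the new task depends on what the clamped layer learned earlier, probed here simply through `acc_after` and the hidden representations `h`.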
Significance. Our approach is innovative in its parallel experiments with children and deep-learning
networks. Our approach is innovative in going beyond performance by using neuroimaging in
children and analyses of the internal workings of the networks to understand the internal processes.
Our use of challenging visual tests will allow us to discover hidden competencies and deficits – in
children and in machine learners. But perhaps more importantly, Study 1 highlights the importance
of understanding the underlying mechanisms of learning, such as reuse, in the lives of children. We
know that early inequalities in children’s environments have early effects – in brain and behavior –
prior to school that may determine children’s abilities to succeed in school. But we do not
understand the mechanisms nor the developmental pathways. Study 1 is a first but important step
built on a theoretical understanding of reuse in hierarchical learning systems; in so doing it
goes beyond the usual approaches of teaching to the test and of randomized trials that show what works
without explaining why. Truly effective training requires a principled understanding of exactly
what is limiting learning and then knowing how – precisely – to go in and fix it.
Aim 2 – SELF-GENERATED TRAINING SETS
Collaborative Team: Behavioral studies: Yu, Smith, Landy, Goldstone, James; Analysis of the
statistical structure and dynamics in self-generated sets: Jones, Sporns, Smith, Yu, Ryoo;
Neuroimaging studies and analytics: James, Pestilli, White, new hire; Machine Learning: Ryoo,
Crandall, Yu, White, new hires; Theoretical Analyses: Jones, Goldstone, Sporns, Natarajan, new
hires.
Learning changes the human brain and those changes alter how human learners engage
with the world. This creates a brain-body-environment complex dynamic system in which learners
actively select and create the learning information, that information unfolds in time, and the learning
experience is inherently multimodal – involving more than vision. None of this is like the so-called
state of the art machine vision that trains feed-forward bottom up networks by showing them still
photographs selected by the trainer and presented in random order. For example, Yu, James,
Smith and Crandall’s pioneering work putting head cameras and head-mounted eye trackers on
toddlers as they played with objects has shown toddlers create dynamic views in which a single
object at a time dominates (e.g., 25-28; 35-36). These single-object-dominant views are visually
larger than other objects in play because toddlers move their eyes and heads close to objects of
interest, and because they use their hands (and short arms) to bring objects close for viewing. In this
work, we have also shown that the object views that toddlers show themselves become
increasingly structured in ways likely to support more view-invariant object recognition and more
abstract shape perception; specifically, they increasingly hold the object and rotate its major axis in
depth (27, 35-39). Machine vision models (e.g. 40-41) have shown that these rotations play a role in
building visual representations (at the top of the hierarchy of layers) that extract abstract 3dimensional shape; studies of toddlers show, in turn, that these self-generated rotations predict rate
of object name vocabulary development (38) and later learning about the mathematics of geometry
(42). Goldstone, Landy, Smith and James (29, 43-45) have also studied the self-generation of
visual information created when the learner writes letters and numbers or moves the components of
an algebraic equation around to solve the problem. All these behaviors are both products of what the learner has already learned and determiners of the input for the next training moment. There are many theoretical reasons (46-48) to suggest that these are computationally powerful properties and may be essential to creating human-like learning. Here we seek to do what Professor Malik proposed to those machine-learning students at KDD: seriously understand these developmental phenomena and then write new and more powerful machine learning algorithms.
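One of the regularities described above – a single object dominating the toddler's head-camera view – can be quantified directly from per-frame segmentation masks. The sketch below is illustrative only; the mask format and the share-of-pixels measure are assumptions, not the labs' actual pipeline.

```python
import numpy as np

# A minimal sketch of one way to quantify the "one object dominates"
# property: given per-frame object segmentation masks from a head-camera
# video, score the largest object's share of all object pixels. The
# mask format is an illustrative assumption, not the actual pipeline.

def dominance(masks):
    """masks: (n_objects, H, W) boolean array for a single frame.
    Returns the largest object's fraction of all object pixels."""
    areas = masks.reshape(masks.shape[0], -1).sum(axis=1)
    total = areas.sum()
    return float(areas.max() / total) if total else 0.0

# Toy frame: one large object close to the camera and two small ones.
frame = np.zeros((3, 8, 8), dtype=bool)
frame[0, :6, :6] = True   # dominant object: 36 pixels
frame[1, 6:, :2] = True   # 4 pixels
frame[2, 6:, 6:] = True   # 4 pixels
score = dominance(frame)  # 36 / 44, about 0.82
```

Tracked over frames, a measure like this would let dominance statistics be compared across children or training conditions.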
We will focus on four (deeply inter-related) computational consequences of self-generated training
sets: (1) selection of the visual information itself; (2) the time scales of mathematically smooth
visual experiences; (3) prediction and just-in-time new information that depends on the learner’s
current state; and (4) the role of multi-modal time-locked information. We will study learning in
three contexts: (1) toddlers’ visual object recognition and its relation to manual activities with
objects; (2) 4 year olds’ learning about letters and numbers, including multi-digit numbers; and (3)
12 to 14 year olds’ learning about algebra. We will conduct 4 kinds of studies:
(1) Collecting and analyzing the training set as created by active learning. We will collect and
analyze the dynamic, learning dependent, and multimodal nature of real-world data sets from the
learner’s point of view. We will use head-mounted eye trackers (we have considerable experience, even with toddlers, e.g., 35) to collect both moment-to-moment scene changes as the learner engages with the material and momentary eye gaze – the precise sampling of available information – because what the learner looks at determines what the learner can learn. We will conduct the experiments in a shared smart-room facility in Psychological and Brain Sciences that will enable collecting dense time-series data on all body movements (head, hands, trunk). We will study two different visual learning tasks: (1) toddlers (12 to 24 months, n=100) learning to visually recognize everyday objects (e.g., cup, spoon) via active play; and (2) 4-year-olds (n=100) learning about less-everyday objects (e.g., garlic press, vise) through play, as well as about letters and numbers by writing, copying, and creating letters and numbers out of sticks. We will also measure recognition
and discrimination of these entities in optimal and suboptimal conditions. The goal is to
characterize the structure in these self-generated training sets, the individual differences in that
structure, and their relation to individual differences in object recognition performance. These data,
important in their own right, are also critical to developing new algorithms that can exploit this
structure.
(2) Training experiments will be conducted in which 4-year-olds (n=300) are trained (for 8 weeks, again following the procedures of James; see 28, 43) via self-generated actions, by watching the dynamic visual events created by others’ actions, and through static pictures. Half the children will be trained on all three tasks, half on just one (e.g., just faces or just letters). This allows us to assess the role of optimizing learning across different categories of stimuli rather than a single class. All
children will be tested for recognition in all domains and detection under optimal and suboptimal
conditions of visual information. A subset (n=100) of the children will also participate in pre- and
post- neuroimaging experiments. These experiments build on the groundbreaking work of James
(28, 43, 49, 50) who has shown the direct benefit of action and the motor system involvement in
visual learning and in temporal processing. One critical area is the Supplementary Motor Area (pre-SMA and SMA proper) in the frontal cortex. These regions are involved in several aspects of integrating information over time, including the association of sensory and sensorimotor information at both sub-second and supra-second scales (51). They both receive information from – and
transmit information to – the visual system, creating a network that may underlie learning through
temporal association of what has been done or seen and what is done or seen next. This region
becomes involved when learners act – by writing, dragging, or moving objects – but not when
watching. We will seek to document this effect, and the role of motor behavior and planning, in
visual learning more generally. Critical to analyzing the neuroimaging data in concert with these multidimensional behavioral properties, Pestilli, White, and Sporns are developing a new computational framework to encode the multi-dimensional aspects of brain connectivity, concurrent and previous behavior, and visual input.
(3) Modeling experiments and analysis. Feedforward networks are useful as models for an initial
understanding of neuronal signaling as one goes up the hierarchy, and they explain a fair bit about
vision and visual learning. But they are fundamentally unlike the brain in terms of their connectivity
and dynamics and are limited to the computation of static functions. Rather than computing a static
function on each of a series of image frames, vision takes a time-continuous input stream and
interprets it through ongoing recurrent computations that include input from other modalities and at
multiple time scales. We will train machine learners on the self-generated data sets from component 1 and on each of the training conditions of component 2. We will use learner-generated video to train networks, an area in which we are already collaborating and have expertise (25, 55, 56). Table 1 (derived from 7, 8, 46-48) provides a further map of our plan of attack in linking the behavioral evidence on learning to neural processes to algorithms.
Table 1

| Computation | Potential algorithmic/representational realization(s) | Potential neural implementation(s) | Putative brain location(s) |
| Rapid perceptual classification | Receptive fields, pooling, and local contrast normalization | Hierarchies of simple and complex cells | Visual system |
| Complex spatiotemporal pattern recognition | Bayesian belief | Feedforward and feedback pathways in cortical hierarchy | Sensory hierarchies |
| Learning efficient coding of inputs | Sparse coding | Thresholding and local competition | Sensory and other systems |
| Decision making | Reinforcement learning of action-selection policies in PFC/BG system | Reward-modulated plasticity in recurrent cortical networks coupled with winner-take-all action selection in the basal ganglia | Prefrontal cortex |
| Decision making | Winner-take-all networks | Recurrent networks coupled via lateral inhibition | Prefrontal cortex and basal ganglia |
| Routing of information flow | Context-dependent tuning of activity in recurrent network dynamics | Recurrent networks implementing line attractors and selection vectors | Common across many cortical areas |
| Working memory | Continuous or discrete attractor states in networks | Persistent activity in recurrent networks | Prefrontal cortex |
| Representation and transformation of variables | Population coding | Time-varying firing rates of neurons representing dot products with basis vectors, nonlinear | Motor cortex and higher cortical areas |
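To make one row of Table 1 concrete, the sketch below implements winner-take-all decision making via recurrent self-excitation and lateral inhibition. The gains, the activity cap, and the iteration count are illustrative assumptions, not values from any model in the proposal.

```python
import numpy as np

# Sketch of the "Decision making" row of Table 1: winner-take-all via
# recurrent self-excitation and lateral inhibition. Gains, cap, and
# step count are illustrative assumptions.

def winner_take_all(inputs, self_excite=1.2, inhibit=0.4, steps=50):
    a = np.array(inputs, dtype=float)
    for _ in range(steps):
        total = a.sum()
        # each unit excites itself and is inhibited by all the others
        a = self_excite * a - inhibit * (total - a)
        a = np.clip(a, 0.0, 1.0)  # rectify and cap activity
    return a

a = winner_take_all([0.50, 0.55, 0.48])
# only the unit with the largest input remains active: [0.0, 1.0, 0.0]
```

Even with closely matched inputs, the lateral inhibition drives all but the strongest unit to zero, which is the selection behavior the table attributes to recurrent networks coupled via lateral inhibition.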
Significance. Aim 2 pushes hard against the limits of current knowledge about the dynamic regularities in training sets, the role of the learner in creating those training sets, and the kinds of learning algorithms that can make use of that structure. But this is where we have to go if we want to understand visual learning, which involves top-down effects related to expectation and attention, as well as active exploration of a scene through a sequence of eye movements and through motor manipulations of the world. With the recurrent loop expanded to pass through the environment, these processes sample different parts of a scene sequentially, potentially selectively sampling the most relevant information for learning.
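The contrast drawn in this aim between static per-frame functions and recurrent interpretation of a continuous stream can be sketched minimally; the leaky-integrator readout and its leak rate are illustrative assumptions, not a model from the proposal.

```python
import numpy as np

# A minimal sketch of the contrast discussed above: a static per-frame
# readout versus a recurrent readout that carries state across a
# time-continuous stream of frames. The leak rate is an illustrative
# assumption.

def static_readout(frames):
    # feed-forward: each frame is mapped independently of its history
    return [float(f.mean()) for f in frames]

def recurrent_readout(frames, leak=0.5):
    # recurrent: a leaky integrator whose state depends on the past
    state, out = 0.0, []
    for f in frames:
        state = (1.0 - leak) * state + leak * float(f.mean())
        out.append(state)
    return out

frames = [np.full((4, 4), v) for v in (0.0, 1.0, 1.0)]
# static_readout(frames)    -> [0.0, 1.0, 1.0]
# recurrent_readout(frames) -> [0.0, 0.5, 0.75]
```

The recurrent readout's output at each moment reflects the stream's history, not just the current frame – the property that distinguishes it from computing a static function on each image.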
Aim 3 – PERCEPTS AND CONCEPTS
Collaborative Team: Behavioral studies: Smith, Landy, Goldstone, James; Analysis of learning: Jones, Smith, Yu; Machine Learning: Natarajan, Crandall, White, new hires
Feed-forward neural networks are very good at finding categories of things, but they are not good at figuring out why categories have the structure they do, nor at reasoning from that knowledge. Humans are very good at this. For example, 7-year-old children readily generalize multi-digit number names to written form – knowing that “five hundred and forty six” refers to “546” and not to “456”, even if they have never seen that number before. Feed-forward deep learning nets are not good at this. But another powerful form of machine learning is variational or Bayesian modeling. Bayesian modeling, based on Bayesian statistics, aims to capture the processes assumed to have generated the data, leading to models that can explain and reason from the structure in the data (9, 10, 21). Such models are not at all brain-like and do not learn in the usual sense of incrementally improving over time. Instead, they make inferences over probability distributions. However, there is a growing push (e.g., 57-61), and a growing set of ideas about how, to take the best of these two approaches to yield a
more complete theory of learning. For example, one approach uses analysis by synthesis. A
bottom-up classification model and a top-down generative model are concurrently learned so as to
best represent the distribution of the inputs in a maximum-likelihood sense. The learning can be
performed using the wake–sleep algorithm (61). In the wake phase, the recognition model
“perceives” training images, and the generative model learns to better reconstruct these images
from their internal representations. In the sleep phase, the generative model “dreams” of images,
and the recognition model learns to better infer the internal representations from the images. By
alternating wake and sleep phases, the two models co-adapt and jointly discover a good
representation for the distribution of images used in training. Other ideas blend prediction,
reinforcement, and top-down training (see Table 1). All these approaches embrace the idea that
perceptual and conceptual learning are products of the same learning machinery. This idea fits the
behavioral research. For example, Landy & Goldstone (45) have shown that good mathematical
reasoners become proficient at mathematics not by trumping or overruling perception and action in
favor of abstract reasoning, but rather by adapting these concrete processes to better fit
mathematical needs. For example, good mathematical reasoners tend to have their visual attention naturally drawn to the multiplication of 3 by 4 in “5+3×4”, which leads them to correctly calculate 17 rather than 32, which they would get if they did the 5+3 operation first.
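The wake-sleep scheme described above can be sketched as a toy one-layer Helmholtz machine; the network sizes, learning rate, and binary toy patterns below are illustrative assumptions, not the models we will build.

```python
import numpy as np

# Toy sketch of the wake-sleep algorithm (61): a one-layer Helmholtz
# machine with a recognition model q(h|x) and a generative model p(x|h)
# over binary vectors. Sizes, learning rate, and the toy patterns are
# illustrative assumptions.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_visible, n_hidden, lr = 8, 4, 0.1
R = rng.normal(0.0, 0.1, (n_hidden, n_visible))  # recognition weights
G = rng.normal(0.0, 0.1, (n_visible, n_hidden))  # generative weights
g_bias = np.zeros(n_hidden)                      # generative prior on h

# Toy "training images": three binary patterns.
data = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
                 [0, 0, 0, 0, 1, 1, 1, 1],
                 [1, 1, 0, 0, 1, 1, 0, 0]], dtype=float)

def recon_error():
    h = sigmoid(R @ data.T) > 0.5          # inferred hidden codes
    return float(np.abs(sigmoid(G @ h).T - data).mean())

err_before = recon_error()
for _ in range(3000):
    # Wake phase: "perceive" a real pattern; train the generative
    # model to reconstruct it from the inferred hidden code.
    x = data[rng.integers(len(data))]
    h = (sigmoid(R @ x) > rng.random(n_hidden)).astype(float)
    G += lr * np.outer(x - sigmoid(G @ h), h)
    g_bias += lr * (h - sigmoid(g_bias))
    # Sleep phase: "dream" a pattern from the generative model; train
    # the recognition model to recover the dream's hidden cause.
    hd = (sigmoid(g_bias) > rng.random(n_hidden)).astype(float)
    xd = (sigmoid(G @ hd) > rng.random(n_visible)).astype(float)
    R += lr * np.outer(hd - sigmoid(R @ xd), xd)
err_after = recon_error()
```

Alternating the two phases co-adapts the recognition and generative models, and reconstruction of the training patterns improves – the joint discovery of a good representation described above.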
We will conduct two long-term training studies: one with first and second graders on reading and writing multi-digit numbers, and one with seventh to ninth graders on algebra. Both will use training apps, notably the algebra training app developed by Landy and Goldstone (19, 20) called Graspable Math (GM) (http://graspablemath.com), which allows users to interact in real time with
math notation using intuitive as well as trained perception-action processes. In the next year this
tutoring system will be used with 100 6th grade students in Australia, all 7-8th grade algebra classes
in Henrico County, Virginia, 4 math classes from 7-9th grades in Monroe County School Corporation,
and more than 500 students who annually visit the Math Assistant Center (MAC) at IUPUI in
Indianapolis. Here we will use this guided training to track changes in perceptual learning (recognizing forms, reading handwritten or crowded forms, measuring both speed and accuracy) and also changes in conceptual knowledge – the principles of base-10 notation (that the 5 in 456 means 5 sets of 10 items) and explicit understanding of the rules of algebraic decomposition, including the generation of equations.
The training materials will be used to train symbolic deep-learners. In particular,
based on the tasks given to students in classroom contexts, we will create large corpora of symbolic
mathematical expressions with goal states. For the multi-digit number reading training, input-output
pairs would include “82 -> eighty-two” and “four hundred seventy-six -> 476.” For the algebraic
reasoning training, pairs would include “4x+2=6 -> x=2” and “Kayla has 3 dogs and each dog has 4
collars. How many collars are there? -> collars=3X4.” By employing machine learning techniques
including variational methods, analysis by synthesis, Deep Boltzmann machines, Deep Belief
Networks, Stacked Auto-Encoders, as well as Advice-Seeking systems (62-65), the goal of training
would be for the networks to develop internal representations that capture important regularities in
the input patterns – regularities that allow the networks to solve the trained problems as well as
previously unseen transfer problems. By training machines like these to solve these problems, we gain insight into the actual regularities present in the training corpus, which can be used to guide human mathematics instruction, and we fundamentally augment our ability to automate the discovery process in highly structured STEM domains.
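As a concrete illustration of the place-value regularity such corpora contain, a minimal rule-based converter (illustrative only, not the training system) maps English number names to numerals:

```python
# A minimal rule-based sketch of the input-output regularity in the
# proposed corpus: mapping English number names (up to 999) onto
# numerals. This illustrates the place-value structure the networks
# must discover; it is not the training system itself.

UNITS = {w: i for i, w in enumerate(
    "zero one two three four five six seven eight nine".split())}
TEENS = {w: i for i, w in enumerate(
    "ten eleven twelve thirteen fourteen fifteen sixteen "
    "seventeen eighteen nineteen".split(), start=10)}
TENS = {w: 10 * i for i, w in enumerate(
    "twenty thirty forty fifty sixty seventy eighty ninety".split(),
    start=2)}

def name_to_number(name):
    """e.g. 'five hundred and forty six' -> 546"""
    total = 0
    for word in name.lower().replace("-", " ").split():
        if word == "and":
            continue
        if word == "hundred":
            total *= 100  # scale the hundreds digit parsed so far
        elif word in TEENS:
            total += TEENS[word]
        elif word in TENS:
            total += TENS[word]
        elif word in UNITS:
            total += UNITS[word]
        else:
            raise ValueError("unknown word: " + word)
    return total

# name_to_number("five hundred and forty six")  -> 546
# name_to_number("four hundred seventy-six")    -> 476
# name_to_number("eighty-two")                  -> 82
```

A learner that has only memorized specific pairs fails on novel names; one that has induced rules like these generalizes – the contrast between the two forms of learning at issue in this aim.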
Significance of Aim 3. Aim 3 attempts to close the big divide in machine learning and in theories of human cognition – between learning about the training materials and representing the regularities in the training set in terms of the principles or functions that could generate those instances. The research will also provide new insights into the development of mathematical symbol systems and have direct impact on educational practices. The work will provide evidence at the heart of developmental theory – how early experiences pave the way for downstream effects and how development is much more than teaching to a test: early learning that is bottom-up and perceptual might be part of the process for the advanced conceptual learning that finds and reasons from the underlying principles and functions.
D. Timetable

|                 | Year 1                             | Year 2                                             | Year 3                                 | Year 4                  |
| Aim 1           | Measuring competencies; Analyses   | Training; Machine learning preliminary experiments | Analyses; Machine learning experiments | Analyses                |
| Aim 2           | Capturing self-generated data sets | Analyses                                           | Training; Analyses                     | Machine learning models |
| Aim 3           |                                    | Data collection                                    | Machine learning models                |                         |
| Excellence Aims | Hire 1 new faculty                 | Hire 2 new faculty; Learning seminar; Submit research grant (Aim 1) | Submit research grant (Aim 3) | Submit research grant (Aim 2); Submit training grant |
E. Significance and Impact
There are fields of contemporary research that seem on the edge of science fiction – machines that learn like children, algorithms in the brain, and rewiring and retuning the brain with precision training – that are not yet realities. But we are close, and the consequences for people – for education, for remediating the unequal environments that, by preschool, may prevent children from having a fair chance at life – are great. Building computational prowess that can help people solve hard and important problems such as cancer and climate change also requires building systems that can learn – that can find regularities – from complex data sets. For all these reasons, we need a field that studies learning by going deep, beyond what a first grade teacher or grandmother knows (study, practice, work hard). Instead, we need to know how learning builds on itself, how all the learning and tasks we do drive change, the kinds of learning mechanisms recruited by different properties of different training sets, and how learners go from knowing the training material to forming the abstract principles in a domain that enable us to be inventive, to think, to be scientists. This is what we are trying to do. The task will be hard, but the contributing fields are ready.
F. Future Funding/Sustainability
This area of research is sustainable in two ways. First, the faculty salaries are justifiable by their contribution to teaching. Psychological and Brain Sciences has one of the largest undergraduate majors and enrollments in the College, and Informatics attracts and is expected to continue to attract many majors. The three proposed faculty will be regular tenure-track faculty who will teach as well as do research. Indeed, it is arguable that this new emerging area will increase enrollments in both majors, as this is a high-impact and high-profile area of concentration for which there are many jobs. We are also confident in the interest of funding agencies in this research domain. There have been increasing calls for proposals in just this area of research: the NSF “Learning” call as described earlier, plus its Computational Cognition initiative, which reads in part: “The National Science Foundation (NSF) is interested in receiving proposals to existing programs, listed below, that explore computational models of human cognition, perception and communication and that integrate considerations and finding across disciplines. Proposals submitted to programs in SBE should include a rigorous computational context, and proposals submitted to programs in CISE should include a rigorous cognitive context.” Also targeting this precise area are the Explainable AI initiative, the human-aware data-driven modeling (D3M) program, and the Communicating with Computers initiative, all through DARPA. Also calling for convergent work on learning across these fields is the DOD initiative BAA-RQKH-2015-0001, Methods and technologies for personalized learning, modeling and assessment.
Because the visual symbol focus is highly relevant to STEM education, submissions to IES, NSF and NIH are also planned. These sources are particularly relevant to Aim 3. Relevant foundations include the James S. McDonnell Foundation under its two current funding programs, Understanding Human Cognition and Studying Complex Systems; the Nuffield Foundation (mathematics education); and the Whitehall Foundation (neuroscience). We also plan to submit a graduate training grant to NSF’s Research Traineeship Program and have already discussed this plan with program officers. Our collective effort and long-term history of external funding provide quite clear evidence that we can obtain external funding. We expect to obtain multiple grants and to continue to do so over the next decade.
G. New positions proposed: The speed of advances in machine learning, particularly in the form of deep learning networks, has been astounding, occurring in less than 10 years. These new machine learning approaches both mimic aspects of human brains and learn from enormous real-world data sets – data sets not unlike the 20,000 words that a child hears a day or the 12 hours a day of viewing scenes filled with objects. These statistical learning engines, like humans, learn from weak supervision. The integrative use of these networks in the context of understanding how brains and people learn is new. Accordingly, although Indiana has recently been hiring machine learners (Crandall, Ryoo, White, and Natarajan, and cognitive psychologists like Jones and Yu with machine learning backgrounds), IU is understaffed in this emerging area and needs to build strength in it if it is to be in the game in future science (and in training). This is a real gap for the university in general, and it is certainly a significant one for this proposal. Because the advances, approach, and potential are new, the game-changing machine learners nationally and internationally are for the most part very young (within 5 years of their PhDs). This is who we need to hire, and there are many exciting new PhDs emerging who are being trained jointly in cognitive psychology and machine learning, and in human neuroscience and machine learning, at Stanford, Berkeley, MIT, NYU, Colorado, among other top schools. For this project we seek to hire these young individuals, who will bring with them new expertise and perspective to add to the excellence that is already here. We specifically seek:
(1) A computational neuroscientist, ideally in vision, with a special interest in visual learning and in
deep learning networks. Given the short lifespan of some neuroimaging technologies, a strong
preference will be given to individuals demonstrating versatility in their science and strong
formal and theoretical grounding.
(2) A machine learning researcher working at the front-end of current knowledge – that is, in
symbolic deep nets or recurrent multi-layered networks. The ideal candidate for this position
would have demonstrated research interests not limited to optimization of learning, but would
also be motivated to compare machine and human learning mechanisms, and propose learning
algorithms that learn in human-like ways.
(3) A theoretician of complex multi-layered networks adept at formal analysis of why and how these
networks work as they do and, ideally, with an interest in how they relate to emerging principles
within human neuroscience. The primary criterion for hiring in this position will be research
excellence in artificial or natural neural networks. An additional important consideration will be
the individual’s ability to contribute to training in and teaching of methods necessary for
conducting research in learning in networks: linear algebra, advanced statistics, computer
programming, robotics, and data science.
H. IU and External Collaborative Arrangements: NONE
I. Metrics and Deliverables: In assessment and evaluation, it is important to distinguish measures of the activities in the program from measures of the outcomes – what one hopes to foster by the program activities – and to be explicit about what are and are not measurable goals. The figure below outlines the components to be evaluated: the research program partitioned into inputs (the investment from IU), the activities we will undertake with that investment, and the outputs (that is, the specific deliverables from these activities). We then consider the outcomes these activities are designed to foster – measured with respect to short-term, mid-term, and long-term goals. The measures – both surveys of participants and objective measures – for each component are listed at the bottom of the figure. To evaluate the program, we evaluate the inputs into the program through objective measures. Outcome measures assess what these activities were designed to foster – placement, sustained contributions to research, and integrative research that helps define the new field. For these, the planned measures are objective achievements as listed in the figure. The activities and objectively measured outcomes are listed in the figure below.
[Figure: Program logic model. Columns run from Inputs (funding, new hires, postdocs, graduate students, research dollars) through Activities (recruiting trainees, courses, workshops, colloquia, committees, conferences, collaborations, rotations, joint mentoring) and Outputs (papers and grants submitted, conference presentations, student enrollment, trainee portfolios, retention in the program) to Short-term, Mid-term, and Long-term Outcomes (filled positions, data collected, funded grants, collaborative publications, placement of trainees, increased faculty and PhD students in the EAR, new algorithms, citation counts, participation in national and international research communities, recognition as a center of excellence) and Ultimate Goals (a more complete understanding of learning, a new field of learning, more effective education, powerful algorithms that learn from the experiences of children, a theory of learning, educational initiatives based on learning principles, and a more complete understanding of developmental process and effective prevention and treatment of developmental disorders). Measures for each component – applicant data, faculty and trainee surveys, counts of publications, grants, and invited talks, and trends in national research – are listed at the bottom of the figure.]
Citations
1. Turing, A. M. (1950). Computing machinery and intelligence. Mind – A Quarterly Review of Psychology and Philosophy, 59(236), 433-460.
2. McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous
activity. The bulletin of mathematical biophysics, 5(4), 115-133.
3. Kriegeskorte, N. (2015). Deep neural networks: A new framework for modeling biological vision
and brain information processing. Annual Review of Vision Science, 1, 417-446.
4. Minsky, Marvin (1974), A framework for representing knowledge, Artificial Intelligence Memo
No. 306, MIT, Artificial Intelligence Laboratory.
5. Byrge, L., Sporns, O., & Smith, L. B. (2014). Developmental process emerges from extended
brain–body–behavior networks. Trends in cognitive sciences, 18(8), 395-403.
6. Rumelhart, D. E., McClelland, J. L., & the PDP Research Group (Eds.). (1986). Parallel distributed processing: Explorations in the microstructure of cognition. Volume I: Foundations. Cambridge, MA: MIT Press.
7. Cadieu, C. F., Hong, H., Yamins, D. L. K., Pinto, N., Ardila, D., Solomon, E. A., Majaj, N. J., & DiCarlo, J. J. (2014). Deep neural networks rival the representation of primate IT cortex for core visual object recognition. PLoS Computational Biology, 10(12), e1003963.
8. Marblestone, A., Wayne, G., & Kording, K. (2016). Towards an integration of deep learning and
neuroscience. arXiv preprint arXiv:1606.03813.
9. Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to grow a mind:
Statistics, structure, and abstraction. Science, 331(6022), 1279-1285.
10. Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning
through probabilistic program induction. Science, 350(6266), 1332-1338.
11. Schank, R. C. (1972). Conceptual dependency: A theory of natural language understanding. Cognitive Psychology, 3(4), 552-631.
12. Smith, L., & Gasser, M. (2005). The development of embodied cognition: Six lessons from
babies. Artificial life, 11(1-2), 13-29.
13. Hornby, G. S. (2007). Modularity, reuse, and hierarchy: measuring complexity by measuring
structure and organization. Complexity, 13(2), 50-61.
14. Goldstone, R. L., Marghetis, T., Weitnauer, E., Ottmar, E. R., & Landy, D. (in press). Adapting
Perception, Action, and Technology for Mathematical Reasoning. Current Directions in
Psychological Science.
15. Verdine, B. N., Golinkoff, R. M., Hirsh-Pasek, K., & Newcombe, N. S. (2014). Finding the
missing piece: Blocks, puzzles, and shapes fuel school readiness. Trends in Neuroscience and
Education, 3(1), 7-13.
16. Lourenco, S. F., Bonny, J. W., Fernandez, E. P., & Rao, S. (2012). Nonsymbolic number and
cumulative area representations contribute shared and unique variance to symbolic math
competence. Proceedings of the National Academy of Sciences, 109(46), 18737-18742.
17. Gershkoff-Stowe, L., & Smith, L. B. (2004). Shape and the first hundred nouns. Child
development, 75(4), 1098-1114.
18. Smith, L. B., Jones, S. S., Landau, B., Gershkoff-Stowe, L., & Samuelson, L. (2002). Object
name learning provides on-the-job training for attention. Psychological Science, 13(1), 13-19.
19. Goldstone, R. L., de Leeuw, J. R., & Landy, D. H. (2015). Fitting Perception in and to Cognition.
Cognition, 135, 24-29.
20. Landy, D., Allen, C. & Zednik, C. (2014). A perceptual account of symbolic reasoning. Frontiers
in Psychology, 5, 275. doi: 10.3389/fpsyg.2014.00275
21. Griffiths, T. L., Chater, N., Kemp, C., Perfors, A., & Tenenbaum, J. B. (2010). Probabilistic
models of cognition: Exploring representations and inductive biases. Trends in cognitive
sciences, 14(8), 357-364.
22. McClelland, J. L., Botvinick, M. M., Noelle, D. C., Plaut, D. C., Rogers, T. T., Seidenberg, M. S.,
& Smith, L. B. (2010). Letting structure emerge: connectionist and dynamical systems
approaches to cognition. Trends in cognitive sciences, 14(8), 348-356.
23. Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313, 504-507.
24. Gal, Y., & Ghahramani, Z. On Modern Deep Learning And Variational Inference. Advances in
Neural Information Processing Systems Workshop on Advances in Approximate Bayesian
Inference, 2015.
25. Bambach, S., Crandall, D. J., & Yu, C. (2013, August). Understanding embodied visual attention
in child-parent interaction. In 2013 IEEE Third Joint International Conference on Development
and Learning and Epigenetic Robotics (ICDL) (pp. 1-6). IEEE.
26. Smith, L. B., Yu, C., & Pereira, A. F. (2011). Not your mother’s view: The dynamics of toddler
visual experience. Developmental science, 14(1), 9-17.
27. James, K. H., Jones, S. S., Swain, S., Pereira, A., & Smith, L. B. (2014). Some views are better
than others: evidence for a visual bias in object views self-generated by toddlers.
Developmental science, 17(3), 338-351.
28. James, K. H. (2010). Sensori-motor experience leads to changes in visual processing in the
developing brain. Developmental science, 13(2), 279-288.
29. Byrge, L., Smith, L. B., & Mix, K. S. (2014). Beginnings of Place Value: How Preschoolers Write
Three-Digit Numbers. Child development, 85(2), 437-443.
30. Ottmar, E., Weitnauer, E., Landy, D., & Goldstone, R. (2015). Graspable mathematics: Using
perceptual learning technology. Integrating Touch-Enabled and Mobile Devices into
Contemporary Mathematics Education, 24.
31. Ahissar, M., Nahum, M., Nelken, I., & Hochstein, S. (2009). Reverse hierarchies and sensory
learning. Philosophical Transactions of the Royal Society of London B: Biological Sciences,
364(1515), 285-299.
32. Ahissar, M., & Hochstein, S. (2004). The reverse hierarchy theory of visual perceptual learning.
Trends in cognitive sciences, 8(10), 457-464.
33. Saxe, A. M., McClelland, J. L., & Ganguli, S. (2013). Exact solutions to the nonlinear dynamics
of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120.
34. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
35. Yu, C., & Smith, L. B. (2016). The social origins of sustained attention in one-year-old human
infants. Current Biology, 26(9), 1235-1240.
36. Clerkin, E., Hart, E., Rehg, J., Yu, C., & Smith, L. B. (in press). Real world visual statistics and
infants' first learned object names. Philosophical Transactions of the Royal Society B.
37. Pereira, A. F., James, K. H., Jones, S. S., & Smith, L. B. (2010). Early biases and
developmental changes in self-generated object views. Journal of vision, 10(11), 22-22.
38. James, K. H., Jones, S. S., Smith, L. B., & Swain, S. N. (2014). Young children's self-generated
object views and object recognition. Journal of Cognition and Development, 15(3), 393-401.
39. Smith, L. B., Street, S., Jones, S. S., & James, K. H. (2014). Using the axis of elongation to
align shapes: Developmental changes between 18 and 24months of age. Journal of
experimental child psychology, 123, 15-35.
40. Goodfellow, I., Lee, H., Le, Q. V., Saxe, A., & Ng, A. Y. (2009). Measuring invariances in deep
networks. In Advances in neural information processing systems (pp. 646-654).
41. Kulkarni, T. D., Whitney, W. F., Kohli, P., & Tenenbaum, J. (2015). Deep convolutional inverse
graphics network. In Advances in Neural Information Processing Systems (pp. 2539-2547).
42. Uttal, D. H., Meadow, N. G., Tipton, E., Hand, L. L., Alden, A. R., Warren, C., & Newcombe, N.
S. (2013). The malleability of spatial skills: a meta-analysis of training studies. Psychological
bulletin, 139(2), 352.
43. James, K. H., & Engelhardt, L. (2012). The effects of handwriting experience on functional
brain development in pre-literate children. Trends in neuroscience and education, 1(1), 32-42.
44. Landy, D., Brookes, D., & Smout, R. (2014). Abstract numeric relations and the visual structure
of algebra. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(5), 1404.
45. Landy, D., & Goldstone, R. L. (2007). Formal notations are diagrams: Evidence from a
production task. Memory & Cognition, 35(8), 2033-2040.
46. Marcus, G., Marblestone, A., & Dean, T. (2014). The atoms of neural computation. Science,
346(6209), 551-552.
47. Eliasmith, C. (2013). How to Build a Brain: A Neural Architecture for Biological Cognition (p.
456). Oxford University Press.
48. Eliasmith, C., & Anderson, C. H. (2004). Neural Engineering: Computation, Representation, and
Dynamics in Neurobiological Systems (p. 356). MIT Press.
49. James, K. H., & Maouene, J. (2009). Auditory verb perception recruits motor systems in the
developing brain: an fMRI investigation. Developmental Science, 12(6), F26-F34.
50. Butler, A. J., James, T. W., & James, K. H. (2011). Enhanced multisensory integration and
motor reactivation after active motor learning of audiovisual associations. Journal of Cognitive
Neuroscience, 23(11), 3515-3528.
51. Schwartze, M., & Kotz, S. A. (2013). A dual-pathway neural architecture for specific temporal
prediction. Neuroscience & Biobehavioral Reviews, 37(10), 2587-2596.
52. Pestilli, F., Yeatman, J., Rokem, A., Kay, K., & Wandell, B. (2014). Evaluation and statistical
inference in living connectomes. Nature Methods. doi:10.1038/nmeth.3098
53. Zheng, C., Pestilli, F., & Rokem, A. (2014). Deconvolution of High Dimensional Mixtures via
Boosting, with Application to Diffusion-Weighted MRI of Human Brain. Neural Information
Processing Systems (NIPS).
54. Vales, C., & Smith, L. B. (2015). Words, shape, visual search and visual working memory in
3-year-old children. Developmental science, 18(1), 65-79.
55. Bambach, S., Crandall, D. J., Smith, L. B., & Yu, C. (2016). Active Viewing in Toddlers Facilitates
Visual Object Learning: An Egocentric Vision Approach. In Annual Conference of the Cognitive
Science Society (CogSci).
56. Gori, I., Aggarwal, J. K., Matthies, L., & Ryoo, M. S. (2016). Multitype Activity Recognition in
Robot-Centric Scenarios. IEEE Robotics and Automation Letters, 1(1), 593-600.
57. Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint
arXiv:1312.6114.
58. Kulkarni, T. D., Whitney, W. F., Kohli, P., & Tenenbaum, J. (2015). Deep convolutional inverse
graphics network. In Advances in Neural Information Processing Systems (pp. 2539-2547).
59. Kim, S., Lee, M., & Shen, J. (2015). A novel deep learning by combining discriminative model
with generative model. In 2015 International Joint Conference on Neural Networks (IJCNN)
(pp. 1-6). IEEE.
60. Stoianov, I., & Zorzi, M. (2012). Emergence of a 'visual number sense' in hierarchical generative
models. Nature neuroscience, 15(2), 194-196.
61. Frey, B. J., Hinton, G. E., & Dayan, P. (1996). Does the wake-sleep algorithm produce good
density estimators? In Advances in neural information processing systems (pp. 661-670).
Morgan Kaufmann.
62. Bengio, Y. (2009). Learning deep architectures for AI. Foundations and trends® in Machine
Learning, 2(1), 1-127.
63. Kaur, N., Khot, T., Kersting, K., & Natarajan, S. (2015). Relational Restricted Boltzmann
Machines. International Conference on Data Mining (ICDM).
64. Odom, P., & Natarajan, S. (2016). Actively Interacting with Experts: A Probabilistic Logic
Approach. European Conference on Machine Learning and Principles of Knowledge Discovery
in Databases (ECML-PKDD).
65. Odom, P., Kumaraswamy, R., Kersting, K., & Natarajan, S. (2016). Learning through
Advice-Seeking via Transfer. International Conference on Inductive Logic Programming (ILP).
Personnel
David Crandall 20% effort – Associate Professor, School of Informatics and
Computing. Expertise: machine learning, computer vision, deep learning nets,
structure from motion, analyzing and modeling large amounts of uncertain data,
egocentric vision, human vision
Robert Goldstone 20% effort – Professor, Psychological and Brain Sciences,
Program in Cognitive Science. Expertise: Human cognition and perceptual
learning, cognitive modeling, large-scale data collection (crowd-sourced, in
educational settings, online), large data set analysis.
Karin James 20% effort—Associate Professor, Psychological and Brain
Sciences, Program in Neuroscience. Expertise: neuroimaging,
neurodevelopment, perceptual and motor learning, visual object recognition.
Michael Jones 20% effort – WK Estes Professor, Psychological and Brain
Sciences, Program in Cognitive Science. Expertise: Computational models of
memory and language, attention in reading and visual navigation, artificial
intelligence, machine learning, data mining.
David Landy 20% effort – Assistant Professor, Psychological and Brain
Sciences, Program in Cognitive Science. Expertise: Computational and
theoretical approaches to formal reasoning, mathematical cognition and
perception, numerical reasoning, distributed cognition
Sriraam Natarajan 20% effort - Associate Professor, School of Informatics and
Computing. Expertise: Machine Learning, Artificial Intelligence, Relational
Learning, Reinforcement learning, Graphical Models, Continuous Time Bayesian
Networks
Franco Pestilli 20% effort – Assistant Professor, Psychological and Brain
Sciences, Program in Neuroscience, Program in Cognitive Science. Expertise:
Human neuroscience, vision, model based neuroanatomy, perceptual decision
making and learning.
Michael Ryoo 20% effort – Assistant Professor, Computer Science and
Informatics. Expertise: Egocentric vision, computer vision, machine learning
from video, stochastic representation, robotics
Linda Smith 20% effort – Distinguished Professor, Psychological and Brain
Sciences, Program in Cognitive Science. Expertise: cognitive, perceptual, motor
development in infants and children, cognitive modeling, large and temporally
dense data collection and analysis, experimental design, developmental theory,
egocentric vision, development of visual object recognition, word learning.
Olaf Sporns 20% effort – Distinguished Professor, Psychological and Brain
Sciences, Program in Cognitive Science, Program in Neuroscience. Expertise:
Computational neuroscience, connectomics, cognitive function in distributed
networks, graph theory, perceptual and motor learning
Martha White 20% effort – Assistant Professor, Computer Science and
Informatics. Expertise: Machine learning from large data sets, reinforcement
learning, representational learning, artificial intelligence
Chen Yu 20% effort—Professor, Psychological and Brain Sciences, Program in
Cognitive Science, School of Informatics and Computing. Expertise: Statistical
learning, word learning, egocentric vision, large and temporal data sets,
wearable sensors, data mining, perceptual, motor and word learning.
OMB No. 0925-0001/0002 (Rev. 08/12 Approved Through 8/31/2015)
BIOGRAPHICAL SKETCH
Provide the following information for the Senior/key personnel and other significant contributors.
Follow this format for each person. DO NOT EXCEED FIVE PAGES.
NAME: Smith, Linda B.
eRA COMMONS USER NAME (credential, e.g., agency login): [email protected]
POSITION TITLE: Distinguished Professor, Psychological and Brain Sciences
EDUCATION/TRAINING (Begin with baccalaureate or other initial professional education, such as nursing,
include postdoctoral training and residency training if applicable. Add/delete rows as necessary.)
INSTITUTION AND LOCATION            DEGREE (if applicable)   Completion Date (MM/YY)   FIELD OF STUDY
University of Wisconsin-Madison     B.S.                     05/73                     Psychology
University of Pennsylvania          Ph.D.                    09/77                     Cognitive Psychology
A. Personal Statement
For over 30 years I have studied early perceptual, motor, and cognitive development, with an emphasis on
how these processes influence infants' and toddlers' learning of their first object names and object
categories. My research program has been continuously funded by NIH and/or NSF since my first grant
in 1978 (funded by NSF). I have published over 200 papers. Specifically relevant to this
proposal are my program of research studying the role of attention, perception and action, and statistical
learning in early word learning, my recent research on egocentric vision in infants and toddlers, and the study
of the development of visual object recognition. I am PI of an Institutional Training Grant (NICHD, 5
predoctoral lines, 3 post-doctoral lines, now in year 22).
B. Positions and Selected Honors
Professor (1997), Distinguished Professor (2007), Psychological and Brain Sciences, Indiana University
Bloomington. James McKeen Cattell Sabbatical Award, 1984; Research Career Development Award, NICHD, 1984-1989; Early Career Contribution, American Psychological Association, 1985; Lilly Faculty
Open Fellowship, 1993-94; Tracy M. Sonneborn Award (Indiana University), 1997; Society of Experimental
Psychologists, 2005; Fellow, American Psychological Society, 2006; Fellow, American Academy of Arts
and Sciences, 2007; Fellow, Cognitive Science Society, 2008; APA Award for Distinguished Scientific
Contributions, 2013; Rumelhart Prize, Cognitive Science Society, 2013; Henry R. Besch, Jr. Promotion of
Excellence Award (Indiana University), 2014.
National Science Foundation, Memory & Cognitive Processes Panel, 1983-86; National Science
Foundation, Advisory Committee for the Directorate for Biological, Behavioral and Social Sciences, 1989-1991; National Institute of Mental Health, Cognition, Emotion and Personality Panel, 1989-1993; Forum
for Federal Research Management, 1992-1994; National Institutes of Health, Scientific Review Advisory
Committee, 1997; National Institutes of Health, Study Section (LCOM), 2002-2006; Reviewer, Pioneer
Grants, 2010; Early Investigator Awards, 2012, 2014; T32 review panel, NICHD, 2014, 2015; Advisory
Board for the Directorate for Social, Behavioral and Economic Sciences, National Science Foundation,
2015-2018.
C. Contribution to Science
My earliest contributions focused on a unified account of developmental differences in perceptual
categorization and perceived similarity of multidimensional stimuli. The work culminated in a mathematical
model of attention and discrimination that explained developmental changes as well as a larger pattern of
phenomena in the adult literature understood under the framework of integral and separable dimensions. The
theoretical framework has relevance to fundamental problems in category learning, in the development of
executive control and in the perception of number.
1. Smith, L. B. (1989) A Model of Perceptual Classification in Children and Adults. Psychological
Review, 96(1), 125-144.
2. Colunga, E. & Smith, L. B. (2005) From the Lexicon to Expectations About Kinds: A Role for
Associative Learning. Psychological Review, 112(2).
3. Hanania, R. & Smith, L. B. (2009) Selective attention and attention switching. Developmental
Science, 1-14.
4. Cantrell, L. & Smith, L. B. (2013) Open questions and a proposal: A critical review of the evidence
on infant numerical abilities. Cognition, 128(3), 331-352.
My second contribution emerged from this earlier work on perceptual classification (and is deeply informed by
that work) but focused on how very early word learners are biased to use some dimensions over others
when generalizing newly learned words. The phenomenon – sometimes known as the shape bias – suggests
that children learn the regularities that underlie classes of categories (artifacts versus substances, for example)
and then use these regularities to generalize the name of one learned instance to the whole category. The
research – still ongoing – showed how the shape bias depended on word learning, how it supported future word
learning, how it was delayed in children with language delay, and how it varied with the linguistic properties of
the specific language being learned. My current research is focused on how the development of the shape bias
depends on prior advances in visual object recognition, advances which in turn depend on the visual
experiences generated by object manipulation.
1. Smith, L.B., Jones, S.S., Landau, B., Gershkoff-Stowe, L. & Samuelson, L. (2002) Early noun
learning provides on-the-job training for attention. Psychological Science, 13, 13-19. PMID:
11892773.
2. Smith, L. B. (2003) Learning to Recognize Objects. Psychological Science, 14(3), 244-251. PMID:
12741748.
3. Smith, L. B. (2009). From fragments to geometric shape: Changes in visual object recognition
between 18 and 24 months. Current Directions in Psychological Science, 18(5), 290-294.
PMCID: PMC2888029.
4. Smith, L. B. & Jones, S. (2011) Symbolic play connects to language through visual object
recognition. Developmental Science, 14, 1142-1149. PMID: PMC3482824.
A third contribution is in the domain of theory and developmental systems, of how development is multi-causal
dependent on complex interactions across levels of analysis and over multiple time scales.
1. Smith, L. B., Thelen, E., Titzer, R. & McLin, D. (1999) Knowing in the Context of Acting: The Task
Dynamics of the A-Not-B Error. Psychological Review, 106(2), 235-260.
2. Smith, L. B. & Thelen, E. (2003) Development as a dynamic system. Trends in Cognitive Science,
7, 343-348.
3. Samuelson, L., Smith, L. B., Perry, L. & Spencer, J. (2011) Grounding Word Learning in Space.
PLoS One 6(12): e28095. PMCID: PMC3237424.
4. Byrge, L., Sporns, O. & Smith, L. B. (2014) Developmental process emerges from extended brain-body-behavior networks. Trends in Cognitive Sciences. PMCID: PMC4112155.
A fourth contribution concerns statistical learning, how very young word learners may learn word-referent
pairings by aggregating over individually ambiguous learning experiences.
1. Smith, L. B. & Yu, C. (2008) Infants rapidly learn word-referent mappings via cross-situational statistics.
Cognition, 106(3), 1558-1568. PMCID: 21585449
2. Yurovsky, D., Smith, L. B. & Yu, C. (2013) Statistical Word Learning at Scale: The Baby's View is Better.
Developmental Science, 1-7. PMID: 24118720.
3. Yu, C. & Smith, L. B. (2012) Modeling Cross-Situational Word-Referent Learning: Prior Questions.
Psychological Review, 119(1), 21-39. PMCID: PMC3892274
4. Smith, LB., Suanda, S., & Yu, C. (2014) The unrealized promise of infant statistical word-referent
learning. Trends in Cognitive Sciences. PMCID: PMC4112155
The fifth contribution is the use of multimodal and temporally dense real-time measures to capture the first-person –
egocentric – visual experiences of infants and toddlers in the three-dimensional world as they move, actively
explore, and interact with social partners. This work, using head cameras, head-mounted eye-trackers, and motion
sensors, has revealed that infant and toddler visual experiences are fundamentally different in their content and in
their dynamics, in ways that play a critical role in learning about objects, in social interactions, and in object name
learning.
1. Smith, L. B., Yu, C., & Pereira, A. F. (2011) Not your mother's view: the dynamics of toddler visual
experience. Developmental Science, 14:1, 9-17. PMCID: 3050020
2. Pereira, A., Smith, L. B. & Yu, C. (2014) A Bottom-up View of Toddler Word Learning. Psychonomic
Bulletin & Review, 21, 178-185. PMCID: PMC3883952.
3. Yu, C. & Smith, L. B. (2012) Embodied Attention and Word Learning by Toddlers. Cognition, 125, 244-262. PMCID: PMC3829203.
4. Yu, C. & Smith, L. B. (2013) Joint Attention without Gaze Following: Human Infants and Their Parents
Coordinate Visual Attention to Objects through Eye-Hand Coordination. PLoS One, 8(11):e79659.
doi:10.1371/journal.pone.0079659. PMCID: 3827436.
Complete list of published works since 2005 (selected publications since 1975) and PMCID numbers
since 2008: http://www.iub.edu/~cogdev/publications.html
D. Research Support
Current
R01HD 28675 (PI: SMITH)
08/01/1995 – 07/31/2017
NIH/NICHD
The shape bias in children’s word learning.
The major goal of this grant, during the current funding period, is to understand the relation between early
changes in shape perception, category learning and early noun learning in typically developing children
and in late talkers.
R01 HD074601 (PI: Yu) 6/1/2013 – 5/31/2018
NICHD
How the sensory-motor dynamics of early parent-child interactions build word learning
Role: co-PI (Smith, Bates co-PIs; PI: Chen Yu)
This research uses dual head-mounted eye-tracking in a longitudinal study of toddlers and parents
as they engage with objects and as parents name them, with the goal of understanding the links between
parental responsivity, real-time attentional coordination, and long-term outcomes in language development.
BCS 1523982 (PI Smith)
12/01/2015 – 11/30/2018
NSF
Comp Cog: Collaborative Research on the Development of Visual Object Recognition
This research uses head cameras to capture a corpus of in-home, infant-perspective scenes to study
the visual properties (and natural statistics) of scenes with respect to developmental changes in visual
object recognition.
DRL 1621093 (PI:Smith)
09/01/2016 - 06/30/2020
NSF
Collaborative Research: Using Cognitive Science Principles to Help Children Learn Place Value
Recently Completed
1R21HD068475-01A (PI: Smith)
01/08/2012 – 12/31/2013
NIH/NEI
Embodied attention in toddlers.
This research examines the role of head stabilization and eye-head alignment in stabilized visual attention,
and the role of spatially localized visual attention in the learning of 12- to 24-month-olds.
Role: PI
David J. Crandall
School of Informatics and Computing
Indiana University
Bloomington, IN 47405
[email protected]
Professional Preparation
– The Pennsylvania State University, Computer Engineering B.S., 2001
– The Pennsylvania State University, Computer Science and Engineering M.S., 2001
– Cornell University, Computer Science M.S., 2007
– Cornell University, Computer Science Ph.D., 2008
Appointments
2016 –
Associate Professor, School of Informatics and Computing, Indiana University
Core faculty in Computer Science, Informatics, Cognitive Science, and Data Science
2010 – 2016 Assistant Professor, School of Informatics and Computing, Indiana University
2008 – 2010 Postdoctoral Associate, Department of Computer Science, Cornell University
2001 – 2003 Senior Research Scientist, Eastman Kodak Company
Five Related Products
– Sven Bambach, David Crandall, Linda Smith, and Chen Yu. Active viewing in toddlers facilitates visual object learning: An egocentric vision approach. In Annual Conference of the
Cognitive Science Society (CogSci), 2016. (Oral, 34% acceptance rate).
– Sven Bambach, Linda Smith, David Crandall, and Chen Yu. Objects in the center: How the
infant’s body constrains infant scenes. In IEEE International Conference on Development and
Learning and Epigenetic Robotics (ICDL), 2016.
– David Crandall and Chenyou Fan. Deepdiary: Automatically captioning lifelogging image
streams. In European Conference on Computer Vision International Workshop on Egocentric
Perception, Interaction, and Computing, 2016.
– Sven Bambach, Stefan Lee, David Crandall, and Chen Yu. Lending a hand: Detecting hands
and recognizing activities in complex egocentric interactions. In IEEE International Conference
on Computer Vision (ICCV), 2015. (Poster, 30.3% acceptance rate).
– Sven Bambach, David Crandall, and Chen Yu. Viewpoint integration for hand-based recognition
of social interactions from a first-person view. In ACM International Conference on Multimodal
Interaction (ICMI), 2015. (Poster, 41% acceptance rate).
Five Other Significant Products
– Mohammed Korayem, Robert Templeman, Dennis Chen, David Crandall, and Apu Kapadia.
Enhancing lifelogging privacy by detecting screens. In ACM CHI Conference on Human Factors in Computing Systems (CHI), 2016. Honorable Mention Award. (CHI Note, 23.4%
acceptance rate).
– Kun Duan, Dhruv Batra, and David Crandall. Human pose estimation through composite
multi-layer models. Signal Processing, 110:15–26, May 2015. (impact factor = 2.238).
– David Crandall, Andrew Owens, Noah Snavely, and Daniel Huttenlocher. SfM with MRFs:
Discrete-continuous optimization for large-scale structure from motion. IEEE Transactions on
Pattern Analysis and Machine Intelligence (PAMI), 35(12):2841–2853, December 2013. (impact
factor = 4.795).
– David Crandall, Lars Backstrom, Daniel Cosley, Siddharth Suri, Daniel Huttenlocher, and
Jon Kleinberg. Inferring social ties from geographic coincidences. Proceedings of the National
Academy of Sciences (PNAS), 107(52):22436–22441, 2010. (impact factor = 9.737).
– David Crandall, Lars Backstrom, Daniel Huttenlocher, and Jon Kleinberg. Mapping the world’s
photos. In International World Wide Web Conference (WWW), 2009. Best paper honorable
mention. (Oral, 13% acceptance rate).
Synergistic Activities
– Area Chair, IEEE Conference on Computer Vision and Pattern Recognition, 2016; Area Chair,
IEEE Winter Conference on Applications of Computer Vision, 2016; Co-Chair, International
Workshop on Social Web for Environmental and Ecological Monitoring 2016 (at AAAI International Conference on Weblogs and Social Media); Data Challenge Chair, ACM Web Science
Conference 2014; program committees of 38 other conferences and workshops.
– Associate Editor, Image and Vision Computing; Associate Editor, Digital Applications in Archaeology and Cultural Heritage; ad-hoc reviewer for 17 other journals.
– Supervised 20 undergraduate research projects including 8 NSF REUs, including a winner of
an IU Provost’s Award for Outstanding Undergraduate Research.
– Advise 7 PhD students, serve on 21 Ph.D. student committees; supervised 33 additional M.S.
and Ph.D. graduate independent study projects. Developed new courses in probabilistic graphical models and computer vision.
– Recent invited talks: IEEE CVPR Workshop on Egocentric Computer Vision (2016), UCLA
IPAM Workshop on Culture Analytics Beyond Text (2016), Science Europe Workshop on Assessment of Research for Resource Allocation (2015), ACM Multimedia Workshop on Geotagging and Applications (2014), IEEE Intl. Symposium on Technology and Society (2013).
– Serve on committees for: Informatics Undergraduate Curriculum, Undergraduate Research,
Diversity Celebration, LGBT Student Support Services outreach.
Collaborators and Co-Editors (28)
Denise Anthony (Dartmouth), Dhruv Batra (Virginia Tech), Johan Bollen (IU), Katy Borner (IU),
Kay Connelly (IU), Ying Ding (IU), Alyosha Efros (Berkeley), Geoffrey Fox (IU), John Franchak
(UC Riverside), Kristen Grauman (Texas), Salit Kark (University of Queensland), Apu Kapadia
(IU), Noam Levin (Hebrew University of Jerusalem), Yunpeng Li (Google), Luca Marchesotti
(Xerox), Andrew Owens (MIT), John Paden (Kansas), Maryam Rahnemoonfar (Texas A&M), Josef
Sivic (INRIA), Linda Smith (IU), Noah Snavely (Cornell), Emmanuel Munguia Tapia (Samsung),
Robert Templeman (NSWC Crane), Peter Todd (IU), Roman Yampolskiy (Louisville), Zhixian Yan
(Samsung), Jun Yang (Samsung), Chen Yu (IU).
Graduate and Postdoctoral Advisors (4)
Lee Coraor (PSU), Rangachar Kasturi (USF), Dan Huttenlocher (Cornell), Jon Kleinberg (Cornell).
Thesis Advisor and Postgraduate-Scholar Sponsor (8)
Current Ph.D. students (6): Sven Bambach, Chenyou Fan, Eman Hassan, Jangwon Lee, Jingya
Wang, Mingze Xu. Graduated Ph.D. students (4): Kun Duan, Mohammed Korayem, Stefan Lee,
Haipeng Zhang. Postgraduate-scholars: None.
Current grants and contracts
– (PI) U.S. Navy, “Advanced Computer Vision Analysis of Microelectronic Imagery,” 2016–2019,
$446,118.
– (PI) NSF Information Integration and Informatics (III), “CAREER: Observing the world
through the lenses of social media,” 2013–2018, $547,964 (with $48,000 REU supplements).
– (Co-PI) NSF Secure and Trustworthy Cyberspace (SATC), “TWC SBE: Medium: Collaborative: A Socio-Technical Approach to Privacy in a Camera-Rich World,” 2014–2018, $1.2 million
($800,000 for IU), with Apu Kapadia and Denise Anthony (Dartmouth).
– (Co-I) NSF DIBBs, “CIF21 DIBBS: Middleware and High Performance Analytics Libraries for
Scalable Data Science,” 2014–2019, $5 million, with Geoffrey Fox, Judy Qiu, Fusheng Wang
(Emory), Shantenu Jha (Rutgers), Madhav Marathe (Virginia Tech).
– (PI) for Indiana University’s subcontract from ObjectVideo, for Intelligence Advanced Research
Projects Activity (IARPA), on “Visual analysis for image geo-location,” 2012–2016, $359,009.
– (PI) IU Social Science Research Commons, “Big Data Approaches for Characterizing Urban
Landscapes and Inequality,” 2016–2017, $15,000, with Tom Evans (IU Geography).
– (Co-PI) IU Ostrom Grants Program, “Workshop on Egocentric Video: from Science to Real-World Applications,” 2016-2017, $5,250, with Chen Yu and Linda Smith (IU Psychology).
Past grants and contracts
– (PI) Google Research Award, “Privacy-Enhanced Life-Logging with Wearable Cameras,” 2014–
2015, $45,800, with A. Kapadia.
– (PI) NVIDIA donation of two Tesla K40 boards (approximate value of $10,000).
– (PI) Google Travel Award, 2014, to attend Google I/O and Research at Google, $2,500.
– (Co-PI) Air Force Office of Scientific Research, “Cloud-Based Perception and Control of Sensor
Nets and Robot Swarms,” 2013–2015, $400,000, with G. Fox, K. Hauser.
– (Co-PI) IU Collaborative Research Grant, “A Novel Multimodal Methodology to Investigate
Communicative Interactions Between Parents and Deaf Infants Before and After Cochlear Implantation,” 2013–2014, $67,000, with Derek Houston, Linda Smith, Chen Yu, David Pisoni,
Tonya Bergeson-Dana.
– (PI) IU Faculty Research Support Program, “Vision for Privacy: Privacy-aware Crowd Sensing
using Opportunistic Imagery,” 2012–2013, $49,990. Co-PI: Apu Kapadia.
– (Co-PI) NSF Human Centered Computing (HCC), “EAGER: Large Scale Optical Music Recognition on the International Music Score Library Project,” 2012–2013, $85,551 (plus REU supplement of $13,000). PI: Chris Raphael.
– (Co-PI) IU Faculty Research Support Program, “Understanding active vision and sensorimotor
dynamics in autistic and typically developing children,” 2011–2012, $75,000.00. PI: Chen Yu.
– (PI) Lilly Endowment, Inc. and Indiana University Data to Insight Center, “Mining photosharing websites to study ecological phenomena,” 2010 – 2011, $49,838.68.
BIOGRAPHICAL SKETCH
NAME: Goldstone, Robert
POSITION TITLE: Chancellor’s Professor of Psychological and Brain Sciences
eRA COMMONS USER NAME (credential, e.g., agency login):
EDUCATION/TRAINING (Begin with baccalaureate or other initial professional education, such as nursing, include postdoctoral training and
residency training if applicable.)
INSTITUTION AND LOCATION                    DEGREE (if applicable)   MM/YY   FIELD OF STUDY
Oberlin College                             B.A.                     1986    Cognitive Science
University of Illinois – Urbana Champaign   M.A.                     1989    Psychology
University of Michigan – Ann Arbor          Ph.D.                    1991    Psychology
A. Personal Statement
My laboratory’s research is on how long-term learning, for example protracted perceptual learning, affects the
information that people extract from a particular learning situation. For example, infants create perceptual
units from their experiences, and then use these formed units to structure the experiences that follow
(Needham, Goldstone, & Wiesen, 2014). Another example of our research bridging multiple scales is a
recurring thread that we humans gradually change our perceptual and attention systems to become better
high-level reasoners in math and science. We are interested in bridging putatively “low level” perceptual and
attention processes with higher-level cognition (Goldstone, Landy, & Brunel, 2011). Consistent with the
research thrust in applied education research, we have pursued three areas of application of our theorizing
about learning processes. First, we have designed tutoring systems in mathematics based on our “Rigged up
Perception and Action Systems” framework (Goldstone, Landy, & Son, 2010). Second, we have proposed
general principles like concreteness fading for improving educational outcomes (Fyfe, Mcneil, Son, &
Goldstone, 2014). Third, we have studied general factors related to similarity, ordering, presentation format,
and instructions that can speed students’ learning of concepts in science (Braithwaite & Goldstone, 2013;
Carvalho & Goldstone, 2014).
B. Positions and Honors
Positions and Employment
Assistant Professor, Indiana University, 1991-1996
Associate Professor, Indiana University, 1996-1998
Full Professor, Indiana University, 1999-2016
Director of the Indiana University Cognitive Science Program, 2006-2010
Other Experience and Professional Memberships
Member of the Board on Behavioral, Cognitive, and Sensory Sciences, National Research Council, National Academy of
Sciences. 2014
Member of National Research Council Committee for “How People Learn II: The Science and Practice of Learning,” 2015-2016.
American Psychological Association’s editor search committee for Psychological Review, 2013-2014
Founding Chair of the Glushko Prize for Outstanding Doctoral Dissertations in Cognitive Science, 2010-2013
Rumelhart Prize Selection Committee, 2007-2011 (Chair: 2009-2011)
Associate Editor of Cognitive Psychology, 2007-2016
Executive Editor of Cognitive Science, 2001-2005; Board of Reviewers, 2006-2016
Senior Editor of Topics in Cognitive Science, 2007-2016
Scientific advisor for the Network for Sensory Research (Mohan Matthen, PI), 2011-2016
Member of Department of Education, Institute of Education Sciences, Postdoctoral Research Training Proposals grant
review panel, 2010
Member of Department of Education, Institute of Education Sciences, Basic Cognitive Processes, permanent grant
review panel (review panel meetings twice per year), 2009-2011
Member of Advisory Board of GlassLabs Games, SRI, 2013-2014.
Member of Advisory Board for Fostering Interdisciplinary Research on Education, NSF grant, Purdue University, 2010-2012
Member of Advisory Board on the PhET Middle School DRK12 project, Department of Education, University of Colorado,
2010-2014
Advisory board member for UCSD’s NSF Science of Learning Center on “Temporal Dynamics of Learning,” 2006-2010
Advisory board member for the Pittsburgh NSF Science of Learning Center, 2007-2014 (Chair: 2009-2014)
Advisory Board for the Institute for Intelligent Systems, University of Memphis, 2015-2016
Advisory Board for “Investigating Motivation and Transfer in Physical Science through Preparation for Future Learning
Instruction,” National Science Foundation CORE grant (PI: Timothy Nokes Malach), 2016.
Advisory Board for “The cognitive science of explanation,” National Science Foundation Career grant (PI:
Tania Lombrozo), 2011-2015.
Advisory Board for the Generalized Intelligent Framework for Tutoring (GIFT), University of Memphis and Army
Research Laboratory, 2014-2016
Review panel, National Science Foundation, Directorate of Education and Human Resources; Division of Research,
Evaluation, and Communication; Research On Learning and Education (ROLE) 2002, Winter and Summer (chair)
Associate Editor for Psychonomic Bulletin and Review, 1998-2000
Board of consulting editors for Journal of Experimental Psychology: Learning, Memory, and Cognition, 1994-1998
Honors
Marquis Award for Most Outstanding Dissertation in Psychology, University of Michigan, 1991
American Psychological Association (APA) Division of Experimental Psychology 1995 Young Investigator Award in
Experimental Psychology: Learning, Memory, and Cognition.
American Psychological Association (APA) Division of Experimental Psychology 1995 Young Investigator Award in
Experimental Psychology: General.
Chase Memorial Award for Outstanding Young Researcher in Cognitive Science, 1996. Established by Carnegie Mellon
University.
James McKeen Cattell Sabbatical Award, 1997-1998
American Psychological Association (APA) Distinguished Scientific Award for Early Career Contribution to Psychology in
the area of Cognition and Human Learning, 2000
National Academy of Sciences Troland research award for “novel experimental analyses and elegant modeling that show
how perceptual learning dynamically adjusts dimensions and boundaries of categories and concepts in human
thought”, 2004
Elected Fellow of the Society of Experimental Psychologists, 2004
Elected Fellow of the Cognitive Science Society, 2006
Elected Fellow of the Association for Psychological Science, 2007
C. Selected Peer-reviewed Publications (selected from 267 publications)
Braithwaite, D. W., Goldstone, R. L., van der Maas, H. L. J., & Landy, D. H. (2016). Informal mechanisms in
mathematical cognitive development: The case of arithmetic. Cognition, 149, 40-55.
Braithwaite, D. W., & Goldstone, R. L., (2015). Effects of variation and prior knowledge on abstract concept learning.
Cognition and Instruction, 33, 226-256.
Goldstone, R. L., Börner, K., & Pestilli, F. (2015). Brain self-portraits: the cognitive science of visualizing neural structure
and function. Trends in Cognitive Sciences, 19, 462-474.
Goldstone, R. L., de Leeuw, J. R., & Landy, D. H. (2015). Fitting Perception in and to Cognition. Cognition, 135, 24-29.
Fyfe, E. R., McNeil, N. M., Son, J. Y., & Goldstone, R. L. (2014). Concreteness fading in mathematics and science
instruction: A systematic review. Educational Psychology Review, 26, 9-25.
Needham, A., Goldstone, R. L., & Wiesen, S. (2014). Learning Visual Units After Brief Visual Experience in 10-month-old
Infants. Cognitive Science, 38, 1507-1519.
Braithwaite, D. W., & Goldstone, R. L. (2013). Integrating formal and grounded representations in combinatorics learning.
Journal of Educational Psychology, 105, 666-682.
Goldstone, R. L., Wisdom, T. N., Roberts, M. E., & Frey, S. (2013). Learning along with others. Psychology of Learning
and Motivation, 58, 1-45.
Wisdom, T. N., Song, X., & Goldstone, R. L. (2013). Social Learning Strategies in a Networked Group. Cognitive Science,
37, 1383-1425.
Day, S. B., & Goldstone, R. L. (2012). The import of knowledge export: Connecting findings and theories of transfer of
learning. Educational Psychologist, 47, 153-176.
Day, S. B., & Goldstone, R. L. (2011). Analogical transfer from a simulated physical system. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 37, 551-567.
Son, J. Y., Smith, L. B., & Goldstone, R. L. (2011). Connecting instances to promote children's relational reasoning.
Journal of Experimental Child Psychology, 108, 260-277.
Goldstone, R. L., & Landy, D. H. (2010). Domain-creating constraints. Cognitive Science, 34, 1357-1377.
Goldstone, R. L., Landy, D. H., & Son, J. Y. (2010). The education of perception. Topics in Cognitive Science, 2, 265-284.
Goldstone, R. L., & Wilensky, U. (2008). Promoting transfer by grounding complex systems principles. The Journal of the
Learning Sciences, 17, 465-516.
Son, J. Y., Smith, L. B., & Goldstone, R. L. (2008). Short-cutting abstraction in children’s object categorizations.
Cognition, 108, 626-638.
Landy, D., & Goldstone, R. L. (2007). How abstract is symbolic thought? Journal of Experimental Psychology: Learning,
Memory, & Cognition, 33, 720-733. [Winner of the 2008 American Psychological Association Division of
Experimental Psychology Young Investigator Award for article in Learning, Memory and Cognition]
D. Research Support
"Teaching the visual structure of algebra through dynamic interactions with notation" (R305A1100060), Department of
Education, Institute of Education Sciences, (PI: David Landy, University of Richmond; Indiana University sub-contract PI:
Robert Goldstone), $1,092,484 ($683,492 to Indiana University), May 2011-April 2015.
“Towards Effectively Quantifying Programming Language Abstraction,” Indiana University Collaborative Research Grant
(PIs: Goldstone and Andrew Lumsdaine), $48,556, April 2011-March 2012.
“The dynamics of brain-body-environment interaction in behavior and cognition,” National Science Foundation,
Interdisciplinary Graduate Education Research Training (IGERT 0903495) (PI: Randall Beer), $3,124,368, August 2009-July 2014.
“Transfer of perceptually grounded principles,” National Science Foundation, Research and Evaluation on Education in
Science and Engineering (DRL-0910218), (PI: Goldstone), $1,073,000, September 2009-August 2014.
Program Director/Principal Investigator James, Karin H.
BIOGRAPHICAL SKETCH
Provide the following information for the Senior/key personnel and other significant contributors in the order listed on Form Page 2.
Follow this format for each person. DO NOT EXCEED FOUR PAGES.
NAME
POSITION TITLE
James, Karin Harman
Associate Professor of Psychology
eRA COMMONS USER NAME (credential, e.g., agency login)
khjames
EDUCATION/TRAINING (Begin with baccalaureate or other initial professional education, such as nursing, include postdoctoral training and
residency training if applicable.)
University of Toronto: B.S., 05/96, Psychology
University of Western Ontario: Ph.D., 05/01, Experimental Psychology
Vanderbilt University: Postdoctoral, 08/04, Experimental Psychology
A. Personal Statement.
A significant proportion of my research program centers upon how learning through manual interaction with the
world shapes cognitive development. Since 2008, I have been specifically investigating how
handwriting experience changes both behavior and brain systems of pre-literate children. As such, I have
significant experience investigating both reading and handwriting processes in young children and in adult
populations. I also have expertise regarding recruitment, experimental protocol, design, data analyses, and
data interpretation of scientific studies on developing populations. My research program uses cross-sectional,
micro-genetic and longitudinal designs to research questions in children aged 18 months to 18 years. My
research has been funded by both the NIH and NSF as well as internal grants. My federally funded work is
collaborative, both internally and externally, providing me with experience with working within a research team.
Given my experience, my contribution to the current proposal is significant and I will be the primary investigator
on these research studies. The following publications are a sampling of this work (role: senior author).
a. James, K.H. & Gauthier, I. (2009). When writing impairs reading: Letter perception’s susceptibility to motor
interference. Journal of Experimental Psychology: General, 138, 416-431
b. James, K.H. (2010) Sensori-motor experience leads to changes in visual processing in the developing
brain. Developmental Science,13 (2), 279-288.
c. James, K.H. & Engelhardt (2012). The effects of handwriting experience on functional brain development
in pre-literate children. Trends in Neuroscience and Education, 1, 32-42.
d. Li, J.X. & James, K.H. (2016)*. Symbol learning is facilitated by the visual variability produced by
handwriting. Journal of Experimental Psychology: General.
* American Psychological Association Spotlight paper, May, 2016
e. Vinci-Booher, S., James, T., & James, K.H. (in press). Functional connectivity among visual and motor
regions is enhanced with handwriting, but not typing, experience in the pre-literate child. Trends in
Neuroscience and Education.
PHS 398/2590 (Rev. 06/09)
Biographical Sketch Format Page
For a list of all publications, see: http://www.ncbi.nlm.nih.gov/sites/myncbi/1LY4bkKzV-
05d/bibliography/48069806/public/?sort=date&direction=ascending
B. Positions and Honors
Positions and Employment
2004-2007
Research Scientist, Indiana University, Psychological and Brain Sciences
2007-2013
Assistant Professor, Indiana University, Psychological and Brain Sciences
2013-present Associate Professor, Indiana University, Psychological and Brain Sciences
Other Experience and Professional Memberships
1998-2000 Canadian Society for Brain, Behaviour and Cognitive Science
1998-2000 Association for Research in Vision and Ophthalmology
2000- Vision Sciences Society
2004- Cognitive Neurosciences Society
2007- Society for Research in Child Development
2008- Cognitive Development Society
Ad Hoc Reviewer: Vision Research, Neuropsychologia, Perception, NeuroImage, Journal of Vision, Cortex,
Journal of Neurophysiology, JEP: General, JEP:HPP, Brain & Cognition, Journal of Social Cognition, Cerebral
Cortex, Journal of Learning Disabilities, Journal of Cognitive Neuroscience, Developmental Science, Child
Development, Cognitive Development, Journal of Social and Affective Neuroscience, Brain and Language,
Psychological Science.
C. Contributions to Science
1. How handwriting experience affects literacy skills. Illiteracy is a major problem in the United States, with
only 60% of children reaching basic literacy skills by 4th grade. Even though the strongest predictor of literacy in
fourth grade is letter knowledge in preschool, few attempts have been made to systematically investigate how
to increase letter knowledge skills in preschool. In 2010 I found that after preschool children were trained to
print letters by hand, subsequently perceiving letters activated the brain networks that are used for reading in
the literate adult. These networks were not active after children typed letters or learned letters through other
forms of practice such as tracing or visual-only study. These findings were replicated and extended to show
that only self-generated handwriting (not watching others write) activated this network after training. Through
this research program, I was also the first investigator worldwide to acquire functional MRI data from
4-year-old children. This research has resulted in numerous invited talks, invited book chapters, media
coverage (Wall Street Journal, New York Times, etc.), and general interest from school boards and educators
country-wide. My role in this research is as the principal investigator. I design and execute the experiments,
analyze the data, and interpret and disseminate the findings. The results of this work have the potential to change
the way we teach children about letters, and can have a significant impact on pre-literacy skills and on
educational curriculum.
a. James, K.H. & Gauthier, I. (2009). When writing impairs reading: Letter perception’s susceptibility to
motor interference. Journal of Experimental Psychology: General, 138, 416-431
b. James, K.H. (2010) Sensori-motor experience leads to changes in visual processing in the developing
brain. Developmental Science,13 (2), 279-288.
c. James, K.H. & Engelhardt (2012). The effects of handwriting experience on functional brain
development in pre-literate children. Trends in Neuroscience and Education, 1, 32-42.
d. Kersey, A.J. & James, K.H. (2013). Brain activation patterns resulting from learning letter forms
through active self-production and passive observation in young children. Frontiers in Cognitive
Science. September, 23 doi: 10.3389/fpsyg.2013.00567
2. The effects of active learning on subsequent cognitive processing. With a few notable exceptions (Gibson,
Piaget), the effects that our bodily actions have on how we perceive the world were not a common motivation
for understanding cognition until the recent popularity of the theory of embodied cognition. Prior to the spread
of the embodied movement, my research was motivated by the question of how our manual interactions with
objects shaped object recognition. I showed through various methodologies (human-computer interaction,
virtual reality environments, psychophysical thresholding, and functional neuroimaging) that if we manipulate
objects through self-generated action, we are better able to recognize those objects upon subsequent visual
perception than if we passively watch the same object movement. Thus, self-generated action is key for
enhancing learning. Further, I investigated the mechanisms that underlie these effects in adults and children,
using fMRI. I continue to research this theme in adults and throughout development; the idea is a pervasive
thread that winds throughout my research projects. In adult populations, we have now shown the ubiquity of
the benefits of active learning: it occurs through computer-controlled action and manual action, and can affect
visual, auditory, and multisensory processing. I have served as the principal investigator on all of these studies,
and have trained 3 graduate students through this work. Four of approximately 12 publications on this topic are
listed below.
a. Harman, K.L., Humphrey, G.K. & Goodale, M.A. (1999). Active manual control of object views facilitates
visual recognition. Current Biology, 9 (22), 1315-1318.
b. James, K.H., & Swain, S.N. (2011). Only self-generated actions create sensori-motor systems in the
developing brain. Developmental Science, 14 (4), 673-687. NIHMS 629884.
c. Butler, A.J., James, T.W. & James, K.H. (2011) Enhanced multisensory integration and motor reactivation
after active motor learning of audiovisual associations. Journal of Cognitive Neuroscience, 23 (11), 3515-3528.
d. Butler, A.J., & James, K.H. (2013). Active learning of novel sound-producing objects: Motor reactivation
and enhancement of visuo-motor connectivity. Journal of Cognitive Neuroscience, 25, 203-218.
3. The development of vision for action in the toddler. Developmental psychologists have seldom considered
the effects of how infants and toddlers hold, turn, and manipulate objects on how they understand object
structure for visual processing. Through a five-year, NIH R01 grant, on which I was P.I., we studied the effects
that toddlers’ actions had on visual competencies. We found that toddlers were able to perform visually guided
actions before they could make corresponding perceptual judgments, indicating the relative maturity of the action
system that drives visual perception. We further found that toddlers’ self-generated manipulation of objects developed
from an immature, somewhat random pattern to a well-organized, adult pattern between 18 and 24 months of age.
These manipulations were biased towards perceiving particular views of objects and were correlated with
higher object recognition abilities, independent of age. This research contributes to the general idea that
actions on objects change visual processing and that self-generated actions create the information upon which
we develop mature object representations.
a. Pereira, A.F., James, K.H., Jones, S.S. & Smith, L.B. (2010). Early biases and developmental changes in
self-generated object views. Journal of Vision, 10(11): 22, 1-13.NIHMS 273211.
b. Street, S., James, K., Jones, S. & Smith, L. B. (2010) Vision for Action in Toddlers: The Posting Task. Child
Development, 82 (6), 2083-2094. NIHMS 321290.
c. Smith, LB., Street, S., Jones, S.S., & James, K.H. (2014) Using the axis of elongation to align shapes:
Developmental changes between 18 and 24 months of age. Journal of Experimental Child Psychology, 123,
15-35. NIHMS 577438.
d. James, K.H., Jones, S.S., Swain, S., Pereira, A., & Smith, L.B. (2014) Some views are better than others:
Evidence for a visual bias in object views self-generated by toddlers. Developmental Science, 17(3), 338-351.
NIHMS 518867.
4. The embodiment of language processing. Traditionally, language has been thought to recruit a specific
temporal-frontal processing network in the brain that is described as being amodal, or independent of
sensorimotor systems. However, this changed when researchers showed that action words were processed in
motor systems that were effector-specific. I was interested in whether this motor activation was
experience-dependent: that is, whether it existed early in development, prior to an extensive action repertoire,
and whether it would change with training. First we documented that 5-year-old children also recruited the motor system
when hearing verbs, similar to the adult brain. Next, we showed that this activation was only created when
children learned verbs through their own self-generated actions, not by watching a model. We have extended
these findings to show that even novel sounds that are not from the known language can recruit these same
sensorimotor systems, but only if the child learns them through self-generated actions. These projects
demonstrate that associative learning (the co-occurrence of action and perception in time) changes brain
processing, not only on a cellular level but on a macro level as well, and has pervasive effects on neural
systems.
a. James, K.H & Mauoene, J. (2010). Verb processing in the developing brain. Developmental Science, 12 (6),
F26-F34.
b. James, K.H., & Swain, S.N. (2011). Only self-generated actions create sensori-motor systems in the
developing brain. Developmental Science, 14 (4), 673-687. NIHMS 629884.
c. James, K.H. & Bose, P. (2011). Self-generated actions during learning objects and sounds create sensori-motor
systems in the developing brain. Cognition, Brain & Behavior, 15 (4), 485-503. NIHMS 629882.
d. Butler, A.J., & James, K.H. (2013). Active learning of novel sound-producing objects: Motor reactivation and
enhancement of visuo-motor connectivity. Journal of Cognitive Neuroscience, 25, 203-218.
D. Research Support
Current and Pending Research Support
External:
National Science Foundation (NSF). 064707-00002B (2014-2018)
The role of gesture in word learning: Collaborative Research.
Co-PIs: James, K.H., & Goldin-Meadow, S. (University of Chicago)
Goals: To investigate the relationship between transitive action and representational gesture processing in the
neural systems of young children. Responsibilities include all functional neuroimaging design, implementation,
data analyses, interpretation and dissemination of findings.
National Institutes of Health P50 Center grant #HD071764 (2012-2017)
“Defining and Treating Specific Written Language Learning Disabilities”
Goals: A multidisciplinary, long-term project centered upon the definition and treatment of reading and writing
disabilities. My specific role is as external advisor on functional neuroimaging portions of the project, mostly
surrounding pediatric neuroimaging.
National Institutes of Health NIH/NICHD 5T32HD007475 (2005-2015)
NIH Training Grant “Integrative study of developmental process”
Role: Co-PI (P.I. Linda Smith)
Goals: A multi-investigator, multi-disciplinary project, in its 15th year, that strives to define developmental
process across numerous domains. The focus is on training developmental scientists in theory, research
methods and professional development. My role is as organizer/instructor of weekly seminar and a supervisory
role for students and post-doctoral fellows interested in functional neuroimaging in pediatric populations.

Clinical and Translational Science Institute Core Facility Grant
Modern diffusion-weighted MRI protocol and analyses for early profiling and detection of reading disabilities in
preschool children.
Role: Co P.I (with Franco Pestilli)
Goals: to track the development of both functional and structural brain changes as children learn to read from
ages 4-7. Role: Design, implementation, data analyses and dissemination of results.
Internal:
Faculty Research Support Program
Interactions of sensory and motor processes in the brain
Goals: to understand the interaction between action and visual processing using a well-known psychophysical
adaptation paradigm.
Role: Collaborator. P.I. Dr. Hannah Block
Indiana University Imaging Research Facility Pilot Program
The development of neural systems used for handwriting
Goals: This project tracks neural change in preschool children as they learn to write by hand.
Role: Principal investigator
Neuroimaging studies of the effects of writing on early mathematical understanding
Goals: To understand how different writing styles of 3-digit numbers affects neural processing of letters and
numbers.
Role: Principal investigator
Effects of active learning on word meaning
Goals: To investigate neural processing of words based on several types of semantic categorizations.
Role: Principal investigator
Sriraam Natarajan, Ph.D.
Professional Preparation
• B.E. in Computer Science, University of Madras, 2001. Honors: First class with distinction
• M.S. in Computer Science, Oregon State University, 2004: “Multi-Criteria Average Reward Reinforcement Learning” (Advisor: Prasad Tadepalli)
• Ph.D. in Computer Science, Oregon State University, 2007: “Effective Decision-Theoretic Assistance
Through Relational Hierarchical Models” (Advisor: Prasad Tadepalli)
Appointments
• Associate Professor, School of Computing and Informatics, Indiana University, Bloomington, IN [July
2016- Present].
• Assistant Professor, School of Computing and Informatics, Indiana University, Bloomington, IN [August
2013- June 2016].
• Visiting Assistant Professor, School of Computing and Informatics, Indiana University, Bloomington,
IN [June 2013-July 2013].
• Assistant Professor, Wake Forest University School of Medicine, Winston-Salem, NC: Translational
Science Institute [Dec 2010-May 2013].
• Assistant Professor, Wake Forest-Virginia Tech School of Biomedical Engineering and Sciences, Winston-Salem, NC [Dec 2010-May 2013].
• Adjunct Assistant Professor, Department of Computer Science, Wake Forest University, Winston-Salem,
NC [June 2011-].
• Visiting Assistant Professor, Department of Computer Science, University of North Carolina, Charlotte,
NC [June 2011-May 2013].
• Post-Doctoral Research Associate, University of Wisconsin-Madison, Wisconsin: Department of Biostatistics and Medical Informatics [Jan 2008-Nov 2010].
Related Publications
1. Phillip Odom and Sriraam Natarajan, Active Advice Seeking for Inverse Reinforcement Learning, International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2016.
2. Phillip Odom and Sriraam Natarajan, Actively Interacting with Experts: A Probabilistic Logic Approach, European Conference on Machine Learning (ECML-PKDD) 2016.
3. Phillip Odom, Tushar Khot, Reid Porter, and Sriraam Natarajan, Knowledge-Based Probabilistic Logic
Learning, Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI), 2015.
4. Tushar Khot, Sriraam Natarajan and Jude Shavlik, Relational One-Class Classification: A Non-Parametric Approach, Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI), 2014.
5. Sriraam Natarajan, Tushar Khot, Kristian Kersting, Bernd Gutmann and Jude Shavlik, Gradient-based
Boosting for Statistical Relational Learning: The Relational Dependency Network Case, special issue of
Machine Learning Journal (MLJ), Volume 86, Number 1, 25-56, 2012.
Other Significant Publications
1. Tushar Khot, Sriraam Natarajan, Kristian Kersting, Bernd Gutmann and Jude Shavlik, Gradient-based Boosting for Statistical Relational Learning: The Markov Logic Network and Missing Data Cases,
Machine Learning Journal, 2014.
2. Alan Fern, Sriraam Natarajan, Kshitij Judah and Prasad Tadepalli, A Decision-Theoretic Model of
Assistance, Journal Of Artificial Intelligence Research (JAIR), 2014.
3. Jeremy Weiss, Sriraam Natarajan and David Page, Multiplicative Forests for Continuous-Time Processes, Neural Information Processing Systems (NIPS) 2012.
4. Sriraam Natarajan, Saket Joshi, Prasad Tadepalli, Kristian Kersting, and Jude Shavlik. Imitation
Learning in Relational Domains: A Functional-Gradient Boosting Approach, International Joint Conference on AI (IJCAI) 2011.
5. Sriraam Natarajan, Prasad Tadepalli and Alan Fern, A Relational Hierarchical Model of Decision-Theoretic Assistance, Knowledge and Information Systems (KAIS), 2011.
Representative Synergistic Activities
• Editorial Board Member, Journal of AI Research (JAIR), Data Mining and Knowledge Discovery
(DMKD)
• Electronic Publishing Editor Journal of AI Research (JAIR)
• Co-chair, AAAI Student Activities Program 2016, 2017
• Co-chair, AAAI Student Abstracts 2014,2015
• Co-organizer, International Workshop on Statistical Relational Learning, 2012, International Workshop
on Statistical Relational AI, 2015, 2014, 2013, 2012, 2010, International Workshop on Collective Learning and Inference in Structured Domains, 2011, 2012, tutorial on Lifted Probabilistic Inference, IJCAI
’11
• Senior Program Committee Member, IJCAI ’15, ’16, ’13, ECML ’16, AISTATS ’15
• Program Committee Member, ICML ’16, ’15, ’13,’12,’11, ’10, ’09, ’08 IJCAI 11,’09, AAAI ’17, ’15,
’14,’13,’12,’11,’10, ’08, ECML ’14,’13,’12, UAI ’13 ILP ’13
• Reviewer of Machine Learning Journal, Journal of Machine Learning Research, Artificial Intelligence
Journal, Journal of AI research, ACM transactions, Journal of Biomedical Informatics, Knowledge and
Adaptive Systems Journal
Active Grants
• NIH R01: Machine Learning for Identifying Adverse Drug Events
$335,000 (my share). Role: Co-PI. PI: David Page. 07/01/15-06/20/20
• DARPA: Communicating with Computers
$419,002 (my share). Role: Co-PI. PI: Dan Roth. Co-PIs: Martha Palmer, Jana Doppa, Julia Hockenmaier. 08/1/15-07/31/20
• ARO Young Investigator: Human-in-the-loop Statistical Relational Learners
$150,000. Role: PI. 9/1/13-08/31/16
• NSF: Intelligent Clinical Decision Support with Probabilistic and Temporal EHR Modeling
$680,000 (my share: $280,000). Role: PI. Co-PIs: Kris Hauser, Shaun Grannis. 1/1/14-12/31/16
• DARPA: DEFT
$300,000 (my share). Role: Co-PI. PI: Jude Shavlik. Co-PI: Chris Re. 11/1/12-05/31/17
• Turvo Inc, research gift: Learning for logistics problems
$225,000. Role: PI. 4/1/15-03/31/18
• XEROX PARC Faculty Award: Learning from an Expert in Noisy, Structured domains: Adapting to
Healthcare Problems
$90,000. Role: PI. 1/15/15-12/31/17
• AFOSR SBIR: Enhanced Text Analytics Using Lifted Probabilistic Inference Algorithms
$280,000. Role: Co-PI. 10/1/13-10/30/16
• NIH: Adverse Drug Events
$95,000. Role: Sub-contract from University of Wisconsin. PI: David Page. 10/1/11-10/1/14
Completed Grants
• DARPA: Machine Reading
$128,500. Role: Subcontract from SRI International. PI: David Israel. 4/1/11-12/31/12
• NIH: Subsystem Modeling Using Dependency Networks
$10,000. Role: Co-I. PI: Edward Ip. 9/1/2012-8/31/2013
• WFU Science Research Fund
$8,000. Role: PI. 10/1/11-4/30/12
Pending Grants
• DARPA: Human is more than a labeler: Curating Probabilistic Logic Models with Human Advice
$530,714. Role: PI. 10/01/16-09/30/20
• IARPA: Guiding Probabilistic Learning in Relational Domains with Crowd-Sourced STs
$800,000. Role: PI. 09/01/16-06/20/20
• ARO: Human-Machine Collaboration in Relational Sequential Decision-Making Problems
$60,000. Role: PI. 01/01/17-09/30/17
Collaborators and Other Affiliations
• Collaborators: R. Balaraman (IIT Madras, India), H.H. Bui (SRI International), R. de Salvo Braz (SRI
International), A. Fern (Oregon State), K. Hauser (Duke University), K. Kersting (Technical University
Dortmund, Germany), D. Page (UWisc- Madison), R. Parr (Duke University), D. Poole (University of
British Columbia), J. Shavlik (UWisc-Madison)
• Graduate Advisor: P. Tadepalli (Oregon State)
OLAF SPORNS – CURRICULUM VITAE
BRIEF BIOGRAPHY
After receiving an undergraduate degree in biochemistry, Olaf Sporns earned
a PhD in Neuroscience at Rockefeller University and then conducted
postdoctoral work at The Neurosciences Institute in New York and San Diego.
Currently he is the Robert H. Shaffer Chair, a Distinguished Professor, and a
Provost Professor in the Department of Psychological and Brain Sciences at
Indiana University in Bloomington. He is co-director of the Indiana University
Network Science Institute and holds adjunct appointments in the School of
Informatics and Computing and the School of Medicine. His main research
area is theoretical and computational neuroscience, with a focus on complex
brain networks. In addition to over 200 peer-reviewed publications he is the
author of two books, “Networks of the Brain” and “Discovering the Human
Connectome”. He currently serves as the Founding Editor of “Network
Neuroscience”, a journal published by MIT Press. Sporns was awarded a
John Simon Guggenheim Memorial Fellowship in 2011 and was elected
Fellow of the American Association for the Advancement of Science in 2013.
CONTACT
Olaf Sporns, PhD
Department of Psychological and Brain Sciences
Indiana University
1101 East 10th Street
Bloomington, IN 47405
Office: PSY 360, 812-855-2772
Lab: PSY A308, 812-856-5986
Fax: 812-855-4691
Email: [email protected]
Homepage: http://www.indiana.edu/~cortex
Twitter: @spornslab
RESEARCH INTERESTS
Connectomics
Analysis of neuroanatomical connection patterns, relation of brain structure to functional
connectivity, complexity of neural dynamics, human neuroimaging and clinical disorders,
network evolution and growth, network damage and repair.
Computational Neuroscience
Dynamic models of brain networks, neural synchrony and binding, information-theoretical
measures of functional interactions, models of cognitive systems, neuroinformatics.
Cognition
Cognitive function in distributed networks, dynamics of functional connectivity in brain
imaging, embodied cognition, consciousness.
EDUCATION
1983-1986 Undergraduate studies in Biochemistry at Eberhard-Karls-Universität Tübingen, Germany.
1984-1986 Research Assistant at the Max-Planck-Institute for Developmental Biology, Tübingen.
Research on the role of cholinesterases in brain development in the laboratory of Dr. Paul G. Layer.
1986 B.S. Biochemistry, Eberhard-Karls-Universität Tübingen, Germany.
1986 Research Assistant, Shanghai Institute of Cell Biology, Chinese Academy of Sciences.
1986-1990 Graduate studies at Rockefeller University, New York, NY.
Research carried out in the Laboratory of Molecular and Developmental Biology and at The
Neurosciences Institute.
1990 Ph.D. Neuroscience, Rockefeller University, New York.
Dissertation: “Synthetic neural modeling: computer simulations of perceptual and motor systems”.
Research Advisor: Prof. Gerald M. Edelman.
HONORS AND AWARDS
2001 Pew Scholars in Biomedical Sciences, Nominee for Indiana University.
2002 Outstanding Paper Award, International Conference on Development and Learning ICDL 02, MIT.
2002 Outstanding Junior Faculty Award, Indiana University Bloomington.
2004 Trustees Teaching Award, Indiana University Bloomington.
2008 Distinguished Faculty Award, College of Arts and Sciences, Indiana University Bloomington.
2010 Honorable Mention for “Networks of the Brain” in the category “Biomedicine and Neuroscience”, The 2010 American Publishers Awards for Professional and Scholarly Excellence (PROSE).
2011 NeuroImage “Editor’s Choice Award”, Methods and Modeling Section, shared with M. Rubinov.
2011 Provost Professorship, Indiana University.
2011 John Simon Guggenheim Memorial Fellowship.
2012 Honorable Mention for “Discovering the Human Connectome” in the category “Biomedicine and Neuroscience”, The 2012 American Publishers Awards for Professional and Scholarly Excellence (PROSE).
2013 Fellow of the American Association for the Advancement of Science (elected).
2014 Distinguished Professor, Indiana University.
2014 Robert H. Shaffer Endowed Chair.
2015 Trustees Teaching Award, Indiana University Bloomington.
2015 Thomson Reuters “Highly Cited Researcher” in Neuroscience/Behavior.
2015 Thomson Reuters: Listed as one of “The World’s Most Influential Scientific Minds” in Neuroscience/Behavior.
2016 Distinguished Cognitive Scientist Award, UC Merced.
2016 Grossman Award, Society of Neurological Surgeons.
MAJOR GRANTS (FUNDED)
Completed
1998-2001 “Machine Psychology: Modeling the Brain and Behavior through Real World Devices”, W.M. Keck Foundation, Co-Investigator (PI: Gerald M. Edelman)
2001-2003 “Cortical Architectures for Pattern Recognition”, Department of Defense Contract NMA201-01-C-0034, PI
2002-2005 “Neuro-Robotic Models of Learning and Addiction”, NIH-NIDA R21 DA1564, PI
2005-2010 “Network Mechanisms Underlying Cognition and Recovery of Function in the Human Brain”, James S. McDonnell Foundation, Co-Investigator (PI: Randy McIntosh)
2009-2011 “An Information-Theoretical Approach to Coordinated Behavior”, Air Force Office of Scientific Research, Co-Investigator
2011-2013 “Communities and Criticality in Brain Networks across Development and ADHD”, James S. McDonnell Foundation, Co-Investigator (PI: Steve Petersen)
2011-2014 “Brain Network Recovery II”, James S. McDonnell Foundation, Co-Investigator (PI: Randy McIntosh)
2010-2015 IGERT Training Grant “The Dynamics of Brain-Body-Environment Systems in Behavior and Cognition”, National Science Foundation, Co-PI (PI: Randy Beer)
2010-2015 “Mapping the Human Connectome: Structure, Function, and Heritability”, NIH Blueprint Project, Co-Investigator (PIs: David Van Essen, Kamil Ugurbil)
Current
2012-2016 “Connectivity and Information Flow in a Complex Brain”, National Science Foundation, Co-Investigator (PI: Ralph Greenspan)
2014-2018 “Testing network-based hubs through lesion analysis”, James S. McDonnell Foundation, Co-Investigator (PI: Daniel Tranel)
2015-2018 “CRCNS: Linking Connectomics and Large-Scale Dynamics of the Human Brain”, National Institutes of Health, NCCIH, PI
PUBLICATIONS
Books
6. Leergaard TB, Sporns O, Hilgetag C (2013) Mapping the Connectome: Multi-Level Analysis of Brain
Connectivity. Frontiers Research Topics, Lausanne, Switzerland.
5. Sporns O (2012) Discovering the Human Connectome. MIT Press, Cambridge.
4. Sporns O (2011) Networks of the Brain. MIT Press, Cambridge.
3. Sendhoff B, Körner E, Sporns O, Ritter H, Doya K, eds. (2009) Creating Brain-Like Intelligence.
Springer: Berlin.
2. Reeke, G.N., Poznanski, R.R., Lindsay, K.A., Rosenberg, J.R. and Sporns, O. (2005) Modeling in the
Neurosciences. From Biological Systems to Neuromimetic Robotics, 2nd edition, CRC Press, London.
1. Sporns, O., and Tononi, G., Eds. (1994) Selectionism and the Brain. Academic Press: San Diego.
Papers (from most recent)
202. Rosenthal G, Sporns O, Avidan G (2016) Stimulus dependent dynamic reorganization of the human
face processing network. Cerebral Cortex (in press).
201. Swanson LW, Sporns O, Hahn JD (2016) Network architecture of the cerebral nuclei (basal ganglia)
association and commissural connectome. Proc Natl Acad Sci USA (in press)
200. Cary RP, Ray S, Grayson DS, Painter J, Carpenter S, Maron L, Sporns O, Stevens AA, Nigg JT, Fair
DA (2016) Network structure among brain systems in ADHD is uniquely modified by stimulant
administration. Cerebral Cortex doi: 10.1093/cercor/bhw209.
199. Avena-Koenigsberger A, Misic B, Hawkins RXD, Griffa A, Hagmann P, Goni J, Sporns O (2016)
Path ensembles and a tradeoff between communication efficiency and resilience in the human
connectome. Brain Struct Func doi: 10.1007/s00429-016-1238-5
198. Mišić B, Betzel RF, de Reus MA, van den Heuvel MP, Berman MG, McIntosh AR, Sporns O (2016)
Network-level structure-function relationships in human neocortex. Cerebral Cortex doi:
10.1093/cercor/bhw089.
197. van den Heuvel MP, Bullmore ET, Sporns O (2016) Comparative connectomics. Trends Cogn Sci
20, 345-361.
196. Sporns O (2016) Connectome Networks: From Cells to Systems. In: Micro-, Meso- and Macro-Connectomics of the Brain, H. Kennedy et al. (eds), pp. 107-127, Springer.
195. De Pasquale F, Della Penna S, Sporns O, Romani GL, Corbetta M (2015) A dynamic core network
and global efficiency in the resting human brain. Cerebral Cortex, doi: 10.1093/cercor/bhv185.
194. Nigam S, Shimono M, Ito S, Yeh FC, Timme N, Myroshnychenko M, Lapish C, Tosi Z, Hottowy P,
Smith WC, Masmanidis S, Litke A, Sporns O, Beggs J (2016) A small proportion of neurons dominates
information transfer in local cortical networks. J Neurosci 36, 670-684.
193. Betzel RF, Fukushima M, He Y, Zuo XN, Sporns O (2016) Dynamic fluctuations coincide with
periods of high and low modularity in resting-state functional brain networks. Neuroimage 127, 287-297.
192. Sporns O, Betzel RF (2016) Modular brain networks. Annu Rev Psychol 67, 613-640.
191. Kim DJ, Davis EP, Sandman CA, Sporns O, O’Donnell BF, Buss C, Hetrick WP (2016) Children’s
intellectual ability is associated with structural network integrity. Neuroimage 124, 550-556.
David H. Landy
Professional Preparation
Alma College; Physics, Mathematics & CS; Alma, MI; B.S./B.A., 1999
Indiana University; Cognitive Science/CS (joint degree); Bloomington, IN; M.S., 2007
UIUC; Cognitive Science (Postdoctoral); Urbana-Champaign, IL; 2009
Appointments
2013-Present Assistant Professor, Cognitive Science Department, Indiana University
2009-2013
Assistant Professor, Psychology/Cognitive Science Department, University of
Richmond, Richmond, VA
2007-2009
Assistant Professor, Cognitive Science Department, Indiana University,
Bloomington, IN
Products
Products Most Closely Related
Ottmar, E., Landy, D. & Goldstone, R. L. (2012). Teaching the perceptual structure of algebraic
expressions: Preliminary findings from the Pushing Symbols intervention. In N. Miyake, D.
Peebles, & R. P. Cooper (Eds.) Proceedings of the 34th Annual Conference of the Cognitive
Science Society (pp. 2156-2161). Austin, TX: Cognitive Science Society.
Landy, D. & Goldstone, R. L. (2010). Proximity and precedence in arithmetic. Quarterly
Journal of Experimental Psychology, 63(10), 1953-1968.
Landy, D. & Goldstone, R. L. (2009). How much of symbolic manipulation is just symbol
pushing? In N. A. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of
the Cognitive Science Society (pp. 1072-1077). Austin, TX: Cognitive Science Society.
Landy, D., & Goldstone, R. L. (2007). How abstract is symbolic thought? Journal of
Experimental Psychology: Learning, Memory, and Cognition, 33(4), 720-733.
Landy, D., & Goldstone, R. L. (2007). Formal notations are diagrams: Evidence from a
production task. Memory and Cognition 35(8), 2033-2040.
Other Significant Products
Guay, B., Chandler, C., Erkulwater, J., & Landy, D. (in press). Testing the Effectiveness of a
Number-based Classroom Exercise. PS: Political Science & Politics. Advance online
publication.
Landy, D., Charlesworth, A., & Ottmar, E. (in press). Categories of Large Numbers in Line
Estimation. Cognitive Science.
Landy, D., Brookes, D., & Smout, R. (2014). Abstract Numeric Relations and the Visual Structure
of Algebra. Journal of Experimental Psychology: Learning, Memory, and Cognition.
Landy, D., Silbert, N. & Goldin, A. (2013). Estimating large numbers. Cognitive
Science, 37(5),775-799.
Goldstone, R. L., Landy, D., & Son, J. Y. (2010). The education of perception, Topics in
Cognitive Science, 2(2), 265-284.
Synergistic activities
Professional Service: Associate editor of Cognitive Science Journal (2015-present); Program
Committee Member, Cognitive Science (2010-2015); Ad hoc reviewing for Cognition, Journal
of Experimental Psychology: Learning, Memory, & Cognition; Cognitive Psychology; Quarterly
Journal of Experimental Psychology; Developmental Psychology; Cognitive Systems Research;
Topics in Cognitive Science
Public Outreach: Developer of two middle-school algebra design tools, Algebra Touch (Berry
Software) and Graspable Math (Graspable, Inc), available for iOS and Chrome browsers, and
used by over 400,000 learners.
Teaching: Developed courses in Cognitive Psychology, Cognitive Science, and the role of
Cognition in Education, focusing on the teaching of mathematical and numerical reasoning.
Collaborators & other affiliations
Collaborators and Co-Editors:
Robert Goldstone (Indiana University), Erin Ottmar (Worcester Polytechnic Institute), Timothy
Salthouse (University of Virginia), Colin Allen (Indiana University), Michael Anderson
(Franklin & Marshall College), David Brookes (Florida International University), Lionel Brunel
(Universite Paul-Valery Montpellier III), Arthur Charlesworth (University of Richmond), Zach
Davis (University of Richmond), L. Elizabeth Crawford (University of Richmond), Aleah Goldin
(N/A), Brian Guay (Duke University), Taylyn Hulse (University of Richmond), Jessica Lesky
(Indiana University), Jaclyn Pierce (University of Richmond), Brad Rogers (Indiana University),
Noah Silbert (University of Cincinnati), Ryan Smout (N/A), Erik Weitnauer (Indiana University),
Carlos Zednick (University of Osnabruck). Total = 23
Graduate and Post-Doctoral Advisors:
Michael Gasser (Indiana University), Robert L. Goldstone (Indiana University) Total = 2
Thesis Advisor/Sponsor (Past 5 years):
Brad Rogers (Indiana University, Graduate Advisor), Caroline Williams (University of Wisconsin, Madison, External Doctoral Advisor), Erin Ottmar (University of Richmond, Post-doctoral sponsor), Erik Weitnauer (Indiana University, Post-doctoral sponsor), Tyler Marghetis (Indiana University, Post-doctoral sponsor). Total = 5
Current and Pending Support – David Landy
Investigator: David Landy
Support: Current / Pending / Submission Planned in Near Future
Project/Proposal Title: The Efficacy of From Here to There: A Dynamic Technology for Improving Algebraic Understanding
Source of Support: Indiana University (IU Faculty Research Support Program - Cognition and Student Learning)
Total Award Amount: $47,000
Total Award Period Covered: 05/01/2016 to 05/31/2017
Location of Project: Bloomington, IN
Person-Months Per Year Committed to the Project: Cal: 0.5; Acad: 0.5; Sumr: 0
Investigator: David Landy
Support: Current / Pending / Submission Planned in Near Future
Project/Proposal Title: Graspable Math: A Dynamic Notation for Classrooms
Source of Support: Indiana University
Total Award Amount: $30,000
Total Award Period Covered: 06/01/2016 to 05/31/2017
Location of Project: Berkeley, CA and Bloomington, IN
Person-Months Per Year Committed to the Project: Cal: 0; Acad: 0; Sumr: 0
Investigator: David Landy
Support: Current / Pending / Submission Planned in Near Future
Project/Proposal Title: Harnessing Human Perception and Gesture to Develop Sharable Expertise in Algebra
Source of Support: IUCRG
Total Award Amount: $75,000
Total Award Period Covered: 05/01/2016 to 04/30/2017
Location of Project: Bloomington, IN
Person-Months Per Year Committed to the Project: Cal: 1; Acad: 1; Sumr:
Investigator: David Landy
Support: Current / Pending / Submission Planned in Near Future
Project/Proposal Title: The Efficacy of From Here to There!: A Dynamic Technology for Improving Algebraic Understanding
Source of Support: Worcester Polytechnic Institute, Subaward from IES Goal 3
Total Award Amount: $728,772
Total Award Period Covered: 07/01/2017 to 06/15/2020
Location of Project: Worcester, MA and Bloomington, IN
Person-Months Per Year Committed to the Project: Cal: 2; Acad: 1; Sumr: 1
Investigator: David Landy
Support: Current / Pending / Submission Planned in Near Future
Project/Proposal Title: Supporting Algebraic Literacy Through Distributed Dynamic Notations
Source of Support: IES CASL, Goal 2
Total Award Amount: $1.4M
Total Award Period Covered: 07/01/2017 to 04/30/2020
Location of Project: Worcester, MA and Bloomington, IN
Person-Months Per Year Committed to the Project: Cal: 2; Acad: 0; Sumr: 2
Biographical Sketch: Michael N. Jones, Ph.D.
1. Professional Preparation
Nipissing University; Psychology; BA, 1995-1999
Sun Microsystems; Scientific Computing; Levels 1 and 2 (2001/2004)
Queen’s University; Psychology; MA, PhD, 1999-2005
University of Colorado, Boulder; Cognitive Science; Postdoc, 2005-2006
2. Appointments
2015-Pres: William and Katherine Estes Endowed Chair of Cognitive Modeling
2013-Pres: Associate Professor of Psychology and Cognitive Science; Indiana University
2013-Pres: Adjunct Associate Professor of Informatics and Computing; Indiana University
2006-2013: Assistant Professor of Psychology and Cognitive Science; Indiana University
2005-2006: NSERC Postdoctoral Research Fellow at the Institute of Cognitive Science, University of Colorado at Boulder (Advisors: Tom Landauer and Walter Kintsch)
3. Research Foci
Computational models of memory and language; Big data science approaches to cognitive science; Knowledge-based intelligent systems; Computational synthesis of neuroimaging data; Statistical methodology for analyzing large-scale data.
4. Selected Honors and Related Experience
National Science Foundation CAREER Award
Psychonomic Society Outstanding Early Career Award
Federation of Associations in Behavioral and Brain Sciences Early Career Investigator Award
Google Faculty Research Award
Indiana University Outstanding Junior Faculty Award
NSERC Julie Payette Research Scholarship
5. Selected Leadership Positions
Editor-in-Chief, Behavior Research Methods (2014-2018)
Associate Editor, Journal of Experimental Psychology: General (2012-2014)
President, Society for Computers in Psychology (2011-2014)
Secretary-Treasurer, Society for Computers in Psychology (2008-2011)
Chair, Psychonomic Society Digital Content Editor Search Committee (2014)
Editor, Big Data in Cognitive Science. Psychology Press (Taylor & Francis)
6. Selected Publications (selected from 70 peer-reviewed publications)
* I am bold, graduate students/postdocs working under my supervision are italicized.
Jones, M. N., & Dye, M. W. (2016). Big data methods for discourse analysis. In Schober, M. F., Rapp, D. N., &
Britt, M. A. (Eds.) Handbook of discourse processes, 2nd Edition
Jones, M. N. (2016). Mining large-scale naturalistic data to inform cognitive theory. In M. N. Jones (Ed.), Big Data
in Cognitive Science. New York: Taylor & Francis
Rubin, T., Koyejo, O., Jones, M. N., & Yarkoni, T. (2016). Generalized correspondence LDA models (GC-LDA) for identifying functional regions in the brain. Advances in Neural Information Processing Systems.
Gruenenfelder, T. M., Recchia, G., Rubin, T., & Jones, M. N. (2015). Graph-theoretic properties of networks based
on word association norms: Implications for models of lexical semantic memory. Cognitive Science.
Jones, M. N., Willits, J., & Dennis, S. (2015). Models of semantic memory. In J. R. Busemeyer & J. T.
Townsend (Eds.) Oxford Handbook of Mathematical and Computational Psychology.
Riordan, B., Dye, M., & Jones, M. N. (2015). Grammatical number processing and eye movements in English
spoken language comprehension. Frontiers in Psychology, 6, 590.
McRae, K., & Jones, M. N. (2013). Semantic memory. In D. Reisberg (Ed.) The Oxford Handbook of Cognitive
Psychology. Oxford University Press.
Hills, T. T., Jones, M. N., & Todd, P. T. (2012). Optimal foraging in semantic memory. Psychological Review,
119, 431-440.
Jones, M. N., Johns, B. T., Recchia, G. L. (2012). The role of semantic diversity in lexical organization.
Canadian Journal of Experimental Psychology, 66, 121-132.
Kievit-Kylar, B., & Jones, M. N. (2012). Visualizing multiple word similarity measures. Behavior Research
Methods, 44, 656-674.
Jones, M. N. & Mewhort, D. J. K. (2007). Representing word meaning and order information in a composite
holographic lexicon. Psychological Review, 114, 1-37.
Jones, M. N., Kintsch, W., & Mewhort, D. J. K. (2006). High-dimensional semantic space accounts of priming.
Journal of Memory and Language, 55, 534-552.
Johns, B. T., & Jones, M. N. (2010). Evaluating the random representation assumption of lexical semantics in
cognitive models. Psychonomic Bulletin & Review, 17, 662-672.
Hare, M., Jones, M. N., Thomson, C., Kelly, S., & McRae, K. (2009). Activating event knowledge. Cognition,
111 (2), 151-167.
7. Synergistic Activities
Service: Secretary-Treasurer (4 years) and President (3 years) of Society for Computers in Psychology (SCiP);
Currently Experts and Industry Liaison for SCiP; Currently Editor-in-Chief of Behavior Research Methods, the
leading methods and modeling journal in the field of psychology; Previously Associate Editor for Journal of
Experimental Psychology: General, top journal in field of experimental psychology; NSF panelist on
Computational Cognition/Cyber-Human Systems/Robust Intelligence programs; Member of NSF College of
Reviewers; Chair of Psychonomic Society Digital Content Editor Search Committee; Have organized over a
dozen high-profile symposia at international conferences.
Instruction: Have organized several Big Data workshops and symposia at major conferences, and have given
tutorials to public and industry (including Google, Motorola, PayPal, Intel) on integrating cognitive models with
machine intelligence to tackle practical tasks in industry. I am Editor of Big Data in Cognitive Science: From
Methods to Insights, a book due out at the beginning of next year that contains tutorials on big data techniques
from informatics for cognitive scientists. My students have won several prestigious awards, including the Marr
Award, Castellan Award, and several NSF/NSERC fellowships.
Translational: I have applied my cognitive models to early detection of Alzheimer’s disease from Electronic
Health Records with the IU School of Medicine. Algorithms containing my convolution-based semantic models
are also used in educational tools used in classrooms (embedded in Summary Street™), and in automated
methods for scoring open-ended inference questions in postsecondary education settings. My BEAGLE model is
also being used to mine concepts and map them to brain activations from the full Neuroimaging literature as part
of Neurosynth.org (funded by NIH).
8. Sample Grant Funding (from $5.1 million funding to date)
2015-2018:
IES-Cognition and Student Learning: “Computer-Based Guided Retrieval Practice for
Elementary School Children” (Co-PI; PI: Karpicke), $1,499,697.
2012-2016:
NIH-R01: “Large-Scale Automated Synthesis of Human Functional Neuroimaging Data”
(Co-PI; PI: Yarkoni), NIMH Neurotechnology program, $2,754,805.
2011-2016:
NSF-CAREER: “Integrating Perceptual and Linguistic Information in Models of
Semantic Representation” (PI), $453,674 (BCS-1056744).
2010-2012:
Google Research Award: “Exploring Perceptually Grounded Vector Space Models of
Semantic Representation” (PI), $50,000 direct costs.
Pestilli, Franco | Indiana University | 2016
Department of Psychological and Brain Sciences, Indiana University, Indiana, USA
Programs in Neuroscience and Cognitive Science; Indiana University Network Science Institute
francopestilli.com | psych.indiana.edu/faculty/franpest.php | [email protected] | +1 (347) 255 2959
Education
2008 Ph.D. Psychology, Cognition & Perception, New York University, NY. Mentor: Marisa Carrasco.
2006 M.A. Psychology, Cognition & Perception, New York University, NY.
2000 Laurea, Experimental Psychology (summa cum laude), University of Rome La Sapienza, ITALY.
Publications (sample from most recent)
1. Takemura, H., Caiafa, C., Wandell, B.A., and Pestilli, F. (2016) Ensemble
tractography. PLoS Computational Biology. DOI: 10.1371/journal.pcbi.1004692
2. Leong, J., Pestilli, F., Wu, C., Samanez-Larkin, G., and Knutson, B. (2016) Anatomical identification of the white-matter pathways between the NAc and insular cortex. Neuron.
3. Ajina, S., Pestilli, F., Rokem, A., and Bridge, H. (2015) Human blindsight is mediated by an intact geniculo-extrastriate pathway. eLife.
4. Goldstone, R. Pestilli, F., and Börner, K., (2015) Self-portraits of the brain: cognitive
science, data visualization, and communicating brain structure and function.
Trends in Cognitive Science. Cover Article.
5. Caiafa, C., and Pestilli, F. Sparse multiway decomposition for analysis and modeling of diffusion imaging and tractography. http://arxiv.org/abs/1505.07170
6. Pestilli, F. (2015) Test-retest measurements and digital validation for in vivo
neuroscience, Nature: Scientific Data 2 (140057) doi:10.1038/sdata.2014.57.
7. Takemura, H., Yeatman, J., Rokem, A., Winawer, J., Wandell, B. and Pestilli, F.
(2015) A major human white-matter pathway between dorsal and ventral visual
cortex. Cerebral Cortex.
8. Allen, B., Spiegel, D., Thompson, B., Pestilli, F.*, Rokers, B*. (2015) Altered white
matter in visual pathways as a result of amblyopia. Vision Research. *Equal
senior contribution.
9. Saber, G.*, Pestilli, F.* and Curtis, C. (2015) Saccade planning increases
topographic activity in visual cortex. The Journal of Neuroscience. 35(1):245-252.
*Equal contribution.
10. Gomez, J., Pestilli, F., Witthoft, N., Golarai, G., Liberman, A., Poltoratski, A., Yoon, J., Grill-Spector, K. (2015) Development of high-level visual fasciculi correlates with face perception. Neuron. 85 (1): 216-227.
11. Rokem, A., Yeatman, J., Pestilli, F., Mezer, A., Wandell, B. Evaluating models of MRI diffusion. PLOS ONE.
12. Pestilli, F., Yeatman, J., Rokem, A., Kay, K. and Wandell, B. (2014) Evaluation and statistical inference in living connectomes. Nature Methods. doi:10.1038/nmeth.3098.
13. Yeatman, J.D., Weiner, K.S., Pestilli, F., Rokem, A., Mezer, A., Wandell, B.A. (2014) The vertical occipital fasciculus: A century of controversy resolved by in vivo measurements. Proceedings of the National Academy of Sciences 111.48:E5214-E5223.
14. Ling, S., Jehee, J., and Pestilli, F. (2014) A review of the mechanisms by which attentional feedback shapes visual selectivity. Brain Structure and Function. doi:10.1007/s00429-014-0818-5.
15. Main, K.*, Pestilli, F.*, Mezer, A., Yeatman, J., Martin, R., Phipps, S., Wandell, B. (2014) Speed discrimination predicts word but not pseudo-word reading rate in adults and children. Brain and Language. *Equal contribution.
16. Hara, Y., Pestilli, F., and Gardner, J.L. (2014) Differing predictions for single-units and neuronal populations of the normalization model of attention. Frontiers in Computational Neuroscience.
17. Zheng, C., Pestilli, F., and Rokem, A. (2014) Deconvolution of High Dimensional Mixtures via Boosting, with Application to Diffusion-Weighted MRI of Human Brain. Neural Information Processing Systems (NIPS).
18. Zheng, C., Pestilli, F., and Rokem, A. (2014) Quantifying error in estimates of human brain fiber directions using Earth Mover’s Distance. arXiv:1411.5271.
19. Ogawa, S., Takemura, H., Horiguchi, H., Terao, M., Haji, T., Pestilli, F., Yeatman, J., Tsuneoka, H., Wandell, B., Masuda, Y. (2014) White matter consequences of retinal receptor and ganglion cell damage. Investigative Ophthalmology and Vision Science, IOVS.
20. Pestilli, F., Heeger, D., Carrasco, M., & Gardner, J. (2011) Attentional enhancement via selection and pooling of early sensory responses in human visual cortex. Neuron. 72(5): 832–846.
21. Pestilli, F., Ling, S., & Carrasco, M. (2009) A population-coding model of attention’s influence on contrast response: estimating neural effects from psychophysical data. Vision Research. 49(7): 735-745.
22. Montagna, B., Pestilli, F., & Carrasco, M. (2009) Attention trades off spatial acuity. Vision Research.
23. Ferrera, V., Teichert, T., Grinband, J., Pestilli, F., Dashnaw, S., & Hirsch, J. (2008) Functional Imaging with Reinforcement, Eyetracking, and Physiological Monitoring. JoVE. 21.
24. Pestilli, F., Viera, G., & Carrasco, M. (2007) How do attention and adaptation affect contrast sensitivity? Journal of Vision. 7 (7):1-12.
25. Liu, T., Pestilli, F., & Carrasco, M. (2005) Transient attention enhances performance and fMRI response in human visual cortex. Neuron. 45 (3): 469−47.
26. Pestilli, F. & Carrasco, M. (2005) Attention enhances contrast sensitivity at cued and impairs it at uncued locations. Vision Research. 45 (14): 1867−75.
Research grants
Funded
Title: Improved accuracy for anatomical mapping and network structure of the Alzheimer’s brain. Source: Indiana Clinical and Translational Sciences Institute (CTSI). Total Award Amount: $200,000. Dates: 09/01/2015-08/31/2017. PI: F. Pestilli. Collaborators: J. Goñi, O. Sporns, L. Shen, W. Yu-Shien, and A. Saykin.
Title: Modern diffusion-weighted MRI protocol and analyses for early profiling and detection of reading disabilities in preschool children. Source: Indiana Clinical and Translational Sciences Institute (CTSI). Total Award Amount: $10,000. Dates: 01/01/2016-12/31/2017. PIs: F. Pestilli and K. James.
Pending
Title: Connectome mapping methods to study the computational significance of large-scale brain connection patterns in predicting attention, object processing and cognitive performance. Source: Office of Naval Research Young Investigator Program. Location: Indiana University. Total Award Amount: $560,000. Dates: 09/01/16-08/31/19. PI: F. Pestilli.
Title: NCS-FO: Precision connectome mapping of networks involved in attention and object processing in individual human brains. Source: National Science Foundation, SMA - IntgStrat Undst Neurl & Cogn Sys. Location: Indiana University. Total Award Amount: $871,000 ($845,000 to IU). Dates: 09/01/16-08/31/20. PI: F. Pestilli. Co-PIs: S. Ling (Boston University), Craig Stewart (Indiana University).
Title: Advanced Computational Neuroscience Network (ACNN). Letter of Intent. Source: National Science Foundation (Special Program from Big Data Hub in the Midwest). Location: Collaborative Proposal, U. Michigan, Indiana University, Northwestern University, Ohio State University, Case Western University. Total Award Amount: $1,000,000 ($332,000 to IU). Dates: 09/01/16-08/31/19. PI: R. Gonzalez (U. Michigan), co-PIs Ivo Dinov and George Adler (U. Michigan). PI: F. Pestilli, co-PIs O. Sporns, A. Saykin (Indiana University), Lei Wang (Northwestern University, IL). PI: S. Sahoo (Case Western), DK Panda and X. Lu (Ohio State University).
Title: Harnessing connectome evaluation methods to map brain networks in individual brains. Source: Brain and Behavior Foundation (NARSAD) Young Investigator Grant. Location: Indiana University. Total Award Amount: $70,000. Dates: 09/01/16-08/31/19. PI: F. Pestilli.
Title: AitF: A multidimensional framework for brain data representation and machine-learning algorithms development with application to study human connectomes. Source: National Science Foundation. Location: Indiana University. Total Award Amount: $799,420. Dates: 09/01/16-08/31/20. PI: F. Pestilli. Co-PI: M. White (Indiana University).
Biographical Sketch
Michael S. Ryoo
School of Informatics and Computing, Indiana University Bloomington
e-mail: [email protected], tel: +1-812-855-9190
Education
The University of Texas at Austin; Electrical & Computer Engineering; Ph.D., 2008
The University of Texas at Austin; Electrical & Computer Engineering; M.S., 2006
Korea Advanced Institute of Science and Technology (KAIST); Computer Science; B.S., 2004
Appointments
2015–now : Assistant Professor, School of Informatics and Computing, Indiana University Bloomington
2011–2015: Research Staff, Robotics Section, NASA’s Jet Propulsion Laboratory (JPL)
2008–2011: Research Scientist (military service), Electronics and Telecommunications Research Institute
(ETRI), South Korea
Related Publications
1. AJ Piergiovanni, C. Fan, and M. S. Ryoo. Temporal Attention Filters for Human Activity Recognition
in Videos. arXiv:1605.08140.
2. M. S. Ryoo and L. Matthies. First-Person Activity Recognition: Feature, Temporal Structure, and
Prediction. International Journal of Computer Vision (IJCV), 119(3):307–328, September 2016.
3. I. Gori, J. K. Aggarwal, L. Matthies, and M. S. Ryoo. Multi-Type Activity Recognition in Robot-Centric Scenarios. IEEE Robotics and Automation Letters (RA-L), 1(1): 593-600, January 2016.
(Best Vision Paper from ICRA 2016)
4. M. S. Ryoo, T. J. Fuchs, L. Xia, J. K. Aggarwal, and L. Matthies. Robot-Centric Activity Prediction
from First-Person Videos: What Will They Do to Me?. In ACM/IEEE International Conference on
Human-Robot Interaction (HRI), March 2015. (Best Paper Nominee)
5. M. S. Ryoo, B. Rothrock, and L. Matthies. Pooled Motion Features for First-Person Videos. In
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June
2015.
6. M. S. Ryoo and L. Matthies. First-Person Activity Recognition: What Are They Doing to Me? In
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June
2013.
7. M. S. Ryoo. Human Activity Prediction: Early Recognition of Ongoing Activities from Streaming
Videos. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), November 2011.
Other Significant Publications
(among a total of 30+ conference and 10 journal publications)
1. T. Shu, M. S. Ryoo, and S.-C. Zhu. Learning Social Affordance for Human-Robot Interaction. In
Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI), July 2016.
2. L. Xia, I. Gori, J. K. Aggarwal, and M. S. Ryoo. Robot-Centric Activity Recognition from First-Person RGB-D Videos. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), January 2015.
3. Y. Iwashita, A. Takamine, R. Kurazume, and M. S. Ryoo. First-Person Animal Activity Recognition
from Egocentric Videos. In Proceedings of the International Conference on Pattern Recognition
(ICPR), August 2014.
4. M. S. Ryoo, S. Choi+, J. H. Joung+, J.-Y. Lee+, and W. Yu. Personal Driving Diary: Automated
Recognition of Driving Events from First-Person Videos. Computer Vision and Image Understanding (CVIU), 117(10): 1299-1312, October 2013.
5. J. H. Joung, M. S. Ryoo, S. Choi, and S. R. Kim. Reliable Object Detection and Segmentation Using
Inpainting. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), October 2012.
6. M. S. Ryoo and J. K. Aggarwal. Stochastic Representation and Recognition of High-level Group
Activities. International Journal of Computer Vision (IJCV), 93(2):183–200, June 2011.
7. M. S. Ryoo and W. Yu. One Video is Sufficient? Human Activity Recognition Using Active Video
Composition. In Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV),
January 2011.
8. M. S. Ryoo+, J. T. Lee+, and J. K. Aggarwal. Video Scene Analysis of Interactions between Humans
and Vehicles Using Event Context. In Proceedings of the ACM International Conference on Image and
Video Retrieval (CIVR), July 2010.
9. M. S. Ryoo and J. K. Aggarwal. Spatio-Temporal Relationship Match: Video Structure Comparison
for Recognition of Complex Human Activities. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), October 2009.
10. M. S. Ryoo and J. K. Aggarwal. Semantic Representation and Recognition of Continued and Recursive
Human Activities. International Journal of Computer Vision (IJCV), 82(1):1–24, April 2009.
11. M. S. Ryoo and J. K. Aggarwal. Observe-and-Explain: A New Approach for Multiple Hypotheses
Tracking of Humans and Objects. In Proceedings of the IEEE Computer Society Conference on
Computer Vision and Pattern Recognition (CVPR), June 2008.
12. M. S. Ryoo and J. K. Aggarwal. Robust Human-Computer Interaction System Guiding a User by
Providing Feedback. In Proceedings of the International Joint Conference on Artificial Intelligence
(IJCAI), January 2007.
13. M. S. Ryoo and J. K. Aggarwal. Recognition of Composite Human Activities through Context-Free
Grammar based Representation. In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), June 2006.
Synergistic Activities
1. Workshop Organizer. 4th Workshop on Egocentric (First-Person) Vision, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
(Organizers: M. S. Ryoo, K. Kitani, Y. Li, Y. J. Lee)
2. Tutorial Organizer. Tutorial on Emerging Topics in Human Activity Recognition, IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), Columbus, OH, June 2014.
(Speakers: M. S. Ryoo, Ivan Laptev, Greg Mori, Sangmin Oh)
3. Workshop Organizer. 3rd Workshop on Egocentric (First-Person) Vision, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014.
(Organizers: K. Kitani, Y. J. Lee, M. S. Ryoo, A. Fathi)
4. Tutorial Organizer. Tutorial on Activity Recognition for Visual Surveillance, IEEE Conference on
Advanced Video and Signal-based Surveillance (AVSS), Beijing, China, September 2012.
(Speakers: M. S. Ryoo, Anthony Hoogs, Arslan Basharat, Sangmin Oh)
5. Tutorial Organizer. Tutorial on Frontiers of Human Activity Analysis, IEEE Conference on Computer
Vision and Pattern Recognition (CVPR), Colorado Springs, CO, June 2011.
(Speakers: J. K. Aggarwal, M. S. Ryoo, K. Kitani)
6. Workshop Organizer. ICPR Contest on Semantic Description of Human Activities (SDHA), International Conference on Pattern Recognition (ICPR), August 2010.
(Organizers: M. S. Ryoo, J. K. Aggarwal, A. K. Roy-Chowdhury)
Funding - Current grants and contracts
1. (PI) ICT R&D program of South Korean Ministry of Science, “Recognizing Objects and Events from
Videos for XD-Media Special Effects”, 2016.01˜2018.12, ∼$330,000 for 36 months, with Electronics and Telecommunications Research Institute (ETRI, Korea).
2. (co-PI) DARPA’s Simplifying Complexity in Scientific Discovery (SIMPLEX), Task “Action Recognition and Learning from a First-Person View,” 2015.03˜2018.05, ∼$250,000 for 39 months, with
S.-C. Zhu (UCLA).
3. (PI) ARL’s Robotics Collaborative Technology Alliance (RCTA), Sub-task P5-5 “Human Activity
Recognition with Context Learning,” 2016.01˜2016.12, $60,000 (2017˜2019 funding amount TBD).
Funding - Past funding
1. (PI) NVIDIA hardware donation program, September 2015.
2. (co-PI, subtask-PI) ARL's Robotics Collaborative Technology Alliance (RCTA), Sub-task P5-2 "Understanding of Human Interactions and Reactions," Phase 1: 2012.04˜2014.12, "Semantic Understanding of Human Activities," Phase 2: 2015.01˜2015.12, ∼$500,000, with L. Matthies (JPL).
3. (PI) JPL’s B&P Funding, “Group Activity Recognition from Aerial Videos,” etc., 2013˜2014, $17,000.
4. (PI) Otis Elevator Korea, “Detection of Abnormal Activities in Elevators,” 2011, $60,000.
Collaborators & Other Affiliations
Collaborators and Co-Editors:
Alireza Fathi (Apple Co.); Thomas J. Fuchs (Memorial Sloan Kettering Cancer Center); Yumi Iwashita
(Kyushu University, Japan); Christopher Kanan (Rochester Institute of Technology); Kris Kitani (Carnegie
Mellon University); Yong Jae Lee (UC Davis); Larry Matthies (Jet Propulsion Laboratory); Brandon
Rothrock (Jet Propulsion Laboratory); Lu Xia (Amazon Co.); Song-Chun Zhu (UCLA)
Graduate Advisors: J. K. Aggarwal (University of Texas at Austin)
Graduate Advisees: Alexander Seewald (Indiana University - PhD student); AJ Piergiovanni (Indiana
University - PhD student); Jangwon Lee (Indiana University - PhD student)
Martha White
Department of Computer Science and Informatics, Indiana University, Bloomington
150 South Woodlawn Avenue
Bloomington, IN 47405, USA
E-mail: [email protected]
Web: www.informatics.indiana.edu/martha
1. Professional preparation
• B.S., Mathematics, University of Alberta, Edmonton, Canada, 2008.
• B.S., Computing Science, University of Alberta, Edmonton, Canada, 2008.
• M.S., Computing Science, University of Alberta, Edmonton, Canada, 2010.
• Ph.D., Computing Science, University of Alberta, Edmonton, Canada, 2015.
2. Appointments
01/2015 — present
Assistant Professor, Department of Computer Science, School of Informatics and Computing, Indiana University, Bloomington
3. Products
Selected relevant publications
• S. Jain, M. White, P. Radivojac. Estimating the class prior and posterior from noisy positives and
unlabeled data. In Advances in Neural Information Processing Systems (NIPS), 2016.
• R. S. Sutton, A. R. Mahmood and M. White. An Emphatic Approach to the Problem of Off-policy
Temporal-Difference Learning. Journal of Machine Learning Research (JMLR), 2016.
• M. White, J. Wen, M. Bowling and D. Schuurmans. Optimal Estimation of Multivariate ARMA
Models. In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI), 2015.
• F. Mirzazadeh, M. White, A. Gyorgy and D. Schuurmans. Scalable Metric Learning for Co-embedding.
In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery
in Databases (ECML PKDD), 2015.
• M. White, Y. Yu, X. Zhang, D. Schuurmans. Convex Multiview Subspace Learning. In Advances in
Neural Information Processing Systems (NIPS), 2012.
Other relevant publications
• Clement Gehring, Yangchen Pan and M. White. Incremental Truncated LSTD. In Proceedings of the
Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI), 2016.
• A. White and M. White. Investigating practical, linear temporal difference learning. In Proceedings
of the International Conference on Autonomous Agents and Multi-agent Systems (AAMAS), 2016.
• J. Veness, M. White, M. Bowling, and A. Gyorgy. Partition Tree Weighting. Data Compression
Conference (DCC), 2013.
• M. White and D. Schuurmans. Generalized Optimal Reverse Prediction. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2012.
• L. Xu, M. White and D. Schuurmans. Optimal Reverse Prediction: A Unified Perspective on Supervised, Unsupervised and Semi-supervised Learning. In Proceedings of the Twenty-Sixth International
Conference on Machine Learning (ICML), 2009. Honorable Mention for Best Paper
4. Synergistic activities
• Program Committee member for several machine learning conferences, including ICML (2015, 2016),
NIPS (2015, 2016), AAAI (2015, 2016), IJCAI (2014, 2015, 2016)
• Reviewer for JMLR, ICML, NIPS, IJCAI, AAAI, AISTATS, Machine Learning Journal, Transactions
on Image Processing, Journal of Autonomous Agents and Multi-agent Systems, Artificial Intelligence
Journal, IEEE Transactions on Neural Networks and Learning Systems
• Served on panels for graduate and undergraduate students through the Center of Excellence for
Women in Technology (CeWIT) at Indiana University
• Tutored Native American students with Frontier College, Edmonton, AB, Canada (2014)
• Workshops for youth, including workshops with Women in Scholarship, Engineering, Science and
Technology (WISEST) and Women in Technology (WIT) promoting diversity in Computing Science
(2011, 2007)
5. Collaborators (past 5 years, alphabetical by last name), total = 10
Bowling Michael (U. of Alberta), Degris Thomas (Google Deepmind), Gyorgy Andras (U. of Alberta),
Pestilli Franco (Indiana U.), Radivojac Predrag (Indiana U.), Schuurmans Dale (U. of Alberta), Sutton
Richard (U. of Alberta), Trosset Michael (Indiana U.), Veness Joel (Google Deepmind), Zhang Xinhua
(NICTA)
6. Current advisees
Ph.D.: Tasneem Alowaisheq, Lei Le, Raksha Kumaraswamy, Yangchen Pan.
7. Ph.D. Advisors
Michael Bowling and Dale Schuurmans, University of Alberta
OMB No. 0925-0001 and 0925-0002 (Rev. 10/15 Approved Through 10/31/2018)
BIOGRAPHICAL SKETCH
Provide the following information for the Senior/key personnel and other significant contributors.
Follow this format for each person. DO NOT EXCEED FIVE PAGES.
NAME: Yu, Chen
eRA COMMONS USER NAME (credential, e.g., agency login): [email protected]
POSITION TITLE: Professor, Psychological and Brain Sciences, Indiana University at Bloomington
EDUCATION/TRAINING (Begin with baccalaureate or other initial professional education, such as nursing,
include postdoctoral training and residency training if applicable. Add/delete rows as necessary.)

INSTITUTION AND LOCATION | DEGREE (if applicable) | Completion Date (MM/YYYY) | FIELD OF STUDY
Beijing University of Technology | B.E. | 07/96 | Automation/Robotics
Beijing University of Technology | M.S. | 07/99 | Automation/Robotics
University of Rochester | M.S. | 06/01 | Computer Science
University of Rochester | Ph.D. | 06/04 | Computer Science
B. Positions and Selected Honors
Positions:
2015-present: Professor, Department of Psychological and Brain Sciences, Cognitive Science Program, School
of Informatics and Computing, Indiana University
2010-2015: Associate Professor, Department of Psychological and Brain Sciences, Cognitive Science
Program, School of Informatics, Indiana University
2004-2009: Assistant Professor, Department of Psychological and Brain Sciences, Department of Computer
Science, Cognitive Science Program, Indiana University.
Other Experience and Professional Memberships
• Panelist of NIH Cognition and Perception, 2013; NIH R24 Study Session, 2012; NIH F12 Study Session, 2009
• Panelist of NSF Robust Intelligence, 2010; NSF Cyber-Enabled Discovery and Innovation (CDI), 2008; NSF
Perception, Action and Cognition (PAC), 2006-2008; NSF Human Social Dynamics (HSD), 2006
• External reviewer of NSF Developmental and Learning Sciences, 2010, 2011, 2012; NSF Perception and Action,
2010, 2011, 2012, 2013; NSF Linguistics Program, 2006, 2010
• Editorial Board, IEEE Transactions on Autonomous Mental Development, 2014-present
• Editorial Board, Infancy, 2013-present
• Associate Editor, Frontiers in Neurorobotics, 2007-present
• Associate Editor, Frontiers in Psychology, 2009-present
Honors and Awards:
• Intel Best Paper Award, 3rd IEEE International Workshop on Egocentric Vision, 2014
• Robert L. Fantz Memorial Award, American Psychological Foundation, 2013
• Best Paper of Experiment Combined with Computational Model, IEEE ICDL Conference, 2012
• David Marr Prize for the Best Paper, Cognitive Science Society, 2012
• Early Distinguished Contribution Award, International Society of Infant Studies, 2008
• Outstanding Junior Faculty Award, Indiana University, 2008
• Marr Prize for the Best Student-Authored Paper, Cognitive Science Society, 2003
• Finalist of Best Cognitive Modeling Paper, Cognitive Science Society, 2006
C. Contribution to Science
My first contribution focuses on studying statistical word learning – how human learners, including adults,
infants, and young children, are capable of aggregating statistical information across multiple encounters of
words and referents.
1. Yu, C. and Smith, L. B. (2007). Rapid Word Learning under Uncertainty via Cross-Situational
Statistics. Psychological Science, 18(5), 414-420.
2. Smith, L. B. & Yu, C. (2008). Infants rapidly learn word-referent mappings via cross-situational
statistics. Cognition, 106(3), 1558-1568. PMCID: 21585449.
3. Yu, C. & Smith, L. B. (2012). Modeling Cross-Situational Word-Referent Learning: Prior Questions.
Psychological Review, 119(1), 21-39. PMCID: PMC3892274.
4. Yu, C. & Smith, L. B. (2011). What You Learn is What You See: Using Eye Movements to Study Infant
Cross-Situational Word Learning. Developmental Science, 14(2), 165-180. PMCID: PMC22213894.
5. Yurovsky, D., Fricker, D., Yu, C. & Smith, L. B. (2014). The role of partial knowledge in statistical word
learning. Psychonomic Bulletin & Review, 21, 1-22. PMCID: PMC3859809.
My second contribution concerns word learning in social contexts.
1. Yu, C., Ballard, D. H. & Aslin, R.N. (2005) The Role of Embodied Intention in Early Lexical
Acquisition. Cognitive Science, 29(6),961-1005.
2. Yu, C. and Ballard, D. H. (2007) A Unified Model of Early Word Learning: Integrating Statistical and
Social Cues. Neurocomputing, 70(13-15), 2149-2165.
3. Pereira, A.F., Smith, L. B. & Yu, C. (2008) Social Coordination in Toddler's Word Learning: Interacting
Systems of Perception and Action. Connection Science, 20(2-3), 73-89. PMCID: PMC2954513.
4. Pereira, A., Smith, L. B. & Yu, C. (2014) A Bottom-up View of Toddler Word Learning. Psychonomic
Bulletin & Review, 21, 178-185. PMCID: PMC3883952.
5. Yu, C. & Smith, L. B. (2012). Embodied Attention and Word Learning by Toddlers. Cognition, 125, 244-262. PMCID: PMC3829203.
My third contribution is in the domain of perception-action coupling in parent-child social interactions. In a set of
studies, we used a novel method that seeks to describe the visual learning environment from a young child's
point of view and measures the visual information that a child perceives in real-time toy play with a parent. The
novel method involves measuring, in high temporal resolutions, eye gaze, hand and head movements, and the
visual field from the child’s point of view (using head cameras).
1. Yu, C., Smith, L.B., Shen, H., Pereira, A.F., & Smith, T.G. (2009). Active Information Selection: Visual
Attention through the Hands. IEEE Transactions on Autonomous Mental Development, Vol. 2, 141-151. PMCID: PMC2964141.
2. Smith, L. B., Yu, C., & Pereira, A. F. (2011) Not your mother's view: the dynamics of toddler visual
experience. Developmental Science, 14:1, 9-17. PMCID: 3050020
3. Yurovsky, D., Smith, L. B. & Yu, C. (2013). Statistical Word Learning at Scale: The Baby's View is Better.
Developmental Science, 1-7. PMID: 24118720.
4. Yu, C. & Smith, L.B. (2013) Joint Attention without Gaze Following: Human Infants and Their Parents
Coordinate Visual Attention to Objects through Eye-Hand Coordination. PLoS ONE 8(11): e79659.
doi:10.1371/journal.pone.0079659. PMCID:PMC3827436.
D. Research Support
• "How the Sensorimotor Dynamics of Early Parent-Child Interactions Build Word Learning Skills" (NIH R01, PI:
Chen Yu, co-PIs: Linda Smith and Jack Bates), $1.6M, Sept 2013 - August 2018.
• "Collaborative Research on the Development of Visual Object Recognition" (NSF, co-PI, PI: Linda B. Smith),
$700,000, Oct 2015 - Sept 2018.
• "Cross-Situational Statistical Word Learning: Behaviors, Mechanisms and Constraints" (NIH R01 HD056029) (PI:
Chen Yu, co-PI: Linda Smith), $978,000, September 2007 - August 2013.
• "The Sensorimotor Dynamics of Naturalistic Child-Parent Interaction and Word Learning" (NSF BCS-0924248)
(PI: Chen Yu, co-PI: Linda Smith), $472,580, September 2009 - August 2013.
• "An Information-Theoretic Approach to Coordinated Behavior" (AFOSR FA9550-09-1-0665) (PI: Chen Yu,
co-PIs: Olaf Sporns and Linda Smith), $300,000, September 2009 - August 2011.
• "Grounding Word Learning in Multimodal Sensorimotor Interaction" (NSF BCS-0544995) (PI: Chen Yu, co-PI:
Linda Smith), $206,810, May 2006 - Oct 2009.
• "Embodied attention in toddlers" (co-PI, PI: Linda B. Smith, NICHD R21), $348,000, December 2011 - March
2013.
Letters of Support:
Dean of the College of Arts and Sciences – to be submitted directly
Chair of Psychological and Brain Sciences – to be submitted directly
Administrator Letters included here:
1. Dean of the School of Informatics and Computing
2. Chair of Informatics
3. Director of the Program of Cognitive Science
Expert outside letters:
Jay McClelland
Lucie Stern Professor in the Social Sciences
Director, Center for Mind, Brain and Computation
Department of Psychology, Stanford University
Bio. Member of the National Academy of Sciences, American Academy of Arts
and Sciences. Jay McClelland received his Ph.D. in Cognitive Psychology from
the University of Pennsylvania in 1975. Over his career, McClelland has
contributed to both the experimental and theoretical literatures in a number of
areas, most notably in the application of connectionist/parallel distributed
processing models to problems in perception, cognitive development, language
learning, and the neurobiology of memory. He was a co-founder with David E.
Rumelhart of the Parallel Distributed Processing (PDP) research group, and
together with Rumelhart he led the effort leading to the publication in 1986 of the
two-volume book, Parallel Distributed Processing, in which the parallel distributed
processing framework was laid out and applied to a wide range of topics in
cognitive psychology and cognitive neuroscience. McClelland and Rumelhart
jointly received the 1993 Howard Crosby Warren Medal from the Society of
Experimental Psychologists, the 1996 Distinguished Scientific Contribution
Award from the American Psychological Association, the 2001 Grawemeyer
Prize in Psychology, and the 2002 IEEE Neural Networks Pioneer Award for this
work.
Jeffrey Elman
Distinguished Professor of Cognitive Science
University of California- San Diego
Member of the American Academy of Arts and Sciences; winner of the Rumelhart Prize.
Jeffrey L. Elman received his Ph.D. from the University of Texas in Linguistics. He
has made multiple major contributions to the theoretical foundations of human
cognition, most notably in the areas of language and development. His work has
had an immense impact across fields as diverse as cognitive science,
psycholinguistics, developmental psychology, evolutionary theory, computer
science and linguistics. Elman’s 1990 paper Finding Structure in Time
introduced a new way of thinking about language knowledge, language
processing, and language learning based on distributed representations in
connectionist networks. The paper is listed as one of the 10 most-cited papers in
the field of psychology. He is a fellow of the Cognitive Science Society and the American
Psychological Society, was President of the Cognitive Science Society, and co-founder of the Kavli Institute at UCSD. He served as Dean of the School of Social
Science at UCSD for a decade. His book Rethinking Innateness, funded by the
MacArthur Foundation, is considered a landmark contribution to the study of
human development.
James Rehg
Professor College of Computing
Georgia Institute of Technology
Dr. Rehg is a leading figure in egocentric vision, wearable sensors and
behavioral imaging. His research interests include computer vision, computer
graphics, machine learning, robotics, and distributed computing. He co-directs
the Computational Perception Laboratory (CPL) and is affiliated with the GVU
Center, Aware Home Research Institute, and the Center for Experimental
Research in Computer Science. He received his Ph.D. from CMU in 1995. He
received an NSF CAREER award in 2001. Dr. Rehg received the 2005 Raytheon
Faculty Fellowship Award from the College of Computing. Dr. Rehg is
also leading a multi-institution effort, funded by an NSF Expedition award, to
develop the science and technology of Behavioral Imaging: the capture and
analysis of social and communicative behavior using multi-modal sensing, to
support the study and treatment of developmental disorders such as autism. He is
the Deputy Director of the NIH Center of Excellence on Mobile Sensor Data-to-Knowledge (MD2K), which is developing novel on-body sensing and predictive
analytics for improving health outcomes.
Rick Van Kooten
Vice Provost for Research
Carmichael Center, Suite 202
530 E. Kirkwood Avenue
Bloomington, IN 47408
September 6, 2016

Dear Professor Van Kooten,
I am writing to express my enthusiastic support for the Learning: Machines, Brains
and Children proposal to be submitted to IU’s EAR program.
The proposed research is well aligned with current SOIC research and education in
data science, vision, and machine learning. I strongly believe that the combination of
faculty expertise, research and capability is excellent not just at IU, but nationally.
This proposal represents an excellent opportunity not just to impact this important
field nationally, but to provide critical insights to IU to help it achieve successful
outcomes for students, faculty and the university as a whole, using state of the art
data science with unique interplay between computer science, cognitive psychology,
and neuroscience.
SoIC already hosts world-leading researchers in data science and machine learning,
spanning the theoretical to the applied, whose expertise will be of direct use in this
project.
SoIC has created a cross-campus data science graduate education program with one
of the broadest curricula, strong industry engagements and highest levels of
intellectual diversity in the country. Founded in 2014, the program now has over 500
graduate students; many of these students will directly benefit from the research
proposed in this project. Undergraduate opportunities are now being explored. The
program has attracted attention due to its innovative online and residential options,
very strong relationship with industry including Silicon Valley companies, and a
curriculum encompassing technical and “decision maker” paths. Its student body
includes many high-ranking industry professionals as well as traditional STEM
students. At a recent NSF workshop, IU was recognized as a leader in defining the
curricular scope for Data Science nationally and internationally.
The proposal focuses on visual learning, a problem that requires an interdisciplinary
approach. The potential transfer of knowledge from the human visual system
to machine vision is a very compelling strategy. The proposal puts forth a plan to
study the mechanisms of how children learn and mimic it in non-biological machines.
IU has strong existing strength in the interdisciplinary fields it bridges. The proposal
requests faculty in three complementary areas that can enhance the strength of IU in
this domain.
Professors Smith, Crandall, Goldstone, and James are an impressive team of faculty to
lead this very strong proposal. They are outstanding scientists with impeccable
reputations. I very strongly support this proposal.
Sincerely,
Raj Acharya
September 8th, 2016
To whom it may concern:
On behalf of the Informatics Division of the School of Informatics and Computing (SOIC), I write
in strong support of the proposal “Learning: Brains, Machines, and Children,” led by Co-PIs
Linda Smith, David Crandall, Rob Goldstone, and Karin James to the OVPR Emerging Areas of
Research (EAR) program.
The proposal is an ambitious and innovative project to study learning from both computational
and human perspectives, and has the promise to significantly impact both of these fields, and even
to help establish a new multidisciplinary field of the science of learning. On the computational
side, the Co-PIs and collaborators from SOIC include David Crandall, Michael Ryoo, Sriraam
Natarajan, and Martha White, all of whom are highly active researchers with expertise in machine
learning, including the emerging areas of deep learning, reinforcement learning, and computer
vision. All four have very successful external funding records with a total of several million
dollars of external grants and contracts from NSF, NIH, DARPA, IARPA, and the military. On the
human side, Linda Smith, Rob Goldstone, Karin James, Michael Jones, David Landy, Franco
Pestilli, Olaf Sporns, and Chen Yu are well-known experts in their fields with strong research
records, including histories of successful long-running collaborations and involvement with SOIC.
The highly interdisciplinary nature of the proposed project aligns well with our Department’s
vision for the future of computing research and education. In fact, this is the type of forward-thinking project that might not be possible in more traditional computer science departments that
were not established on a foundation of interdisciplinarity. This unique characteristic of IU and
SOIC in particular, and the fact that the project would build on three of the university's most well-respected programs (Psychological and Brain Sciences, Cognitive Science, and Informatics and
Computing), and the strong research records of the Co-PIs and collaborators put IU on a strong
trajectory to be the leader in this new emerging field of the science of learning. If awarded, this
EAR proposal would make it possible for IU to realize this potential promise, by for example
hiring three faculty and three postdocs with interdisciplinary training in machine and human
learning.
Please feel free to contact me should you have any questions.
Sincerely,
Erik Stolterman
Chair of Informatics and Professor in Informatics
School of Informatics and Computing
Indiana University, Bloomington
919 E 10th Street, Bloomington, IN, 47408
[email protected]
812 856 5803
Dr. Peter M. Todd
Cognitive Science Program
1101 E. 10th Street
Bloomington, IN 47405 USA
Email: [email protected]
Web: psych.indiana.edu/faculty/pmtodd.php
Lab: www.indiana.edu/~abcwest/
Phone : (812) 855-3914
Fax : (812) 856-1995
6 September 2016
Dear colleagues:
I write as Director of the IU Cognitive Science Program in strong support of the EAR proposal
on Learning: Machines, Brains, and Children. This proposal embodies the interdisciplinary
approach to tackling important new research challenges from multiple perspectives that
Cognitive Science is all about. It combines three of the core fields of Cognitive Science:
neuroscience, cognitive psychology, and AI/computer science—and in fact all but two of the
team members are faculty long affiliated with our Cognitive Science Program (and I expect those
two will soon be affiliated as well). Thus the Cognitive Science Program is already deeply
involved in this proposal, and will support the efforts of this team going forward with the
resources we have available. But we also stand to gain greatly from this proposal: The new
faculty, postdoctoral researchers, and graduate students to be hired and supported will all be in
fields central to Cognitive Science and will contribute significantly to the thriving collaborative
culture and research productivity of our top-rated program.
The basic premise of this proposal is original, exciting, and impactful: to further our
understanding of learning systems, and increase our ability to build new such systems, by
studying the principles driving the most advanced learning system we know—the developing
human brain. Humans grow to be better and better at learning by reusing existing cognitive
mechanisms in new domains, by selecting for themselves the inputs that they will learn from,
and by learning to perceive and form concepts at the same time. The research proposed here will
uncover the way these principles work, and will use those principles to guide the development of
new machine learning algorithms that increasingly drive our information technology and
economy.
The team behind this proposal comprises internationally-renowned leaders in their fields, many
of whom have already collaborated extensively, and together they have laid a considerable
foundation for the research to be done. But to make the necessary big push on these important
questions, the three new faculty hires are needed in areas of machine learning and computational
neuroscience that are not currently covered here at IU. The postdocs and graduate students to be
funded will further make connections between PIs and labs that are certain to generate even more
new research avenues in unanticipated directions. Our existing Cognitive Science Program
infrastructure will assist in their training and introduction to the rest of the IU community.
I fully expect the research from this proposal to make a profound impact on our basic scientific
understanding of learning in human and artificial systems, and to make lasting contributions to
applications of machine learning in a wide range of domains including education and commerce.
It will also make IU a go-to hub for this kind of vital work. I most strongly recommend this
proposal for your consideration and support.
Sincerely,
Dr. Peter M. Todd
Director of the Cognitive Science Program
Provost Professor, Cognitive Science, Psychology, and Informatics
DEPARTMENT OF COGNITIVE SCIENCE
9500 GILMAN DRIVE
LA JOLLA, CALIFORNIA 92093-0501
OFFICE: (858) 534-1147
FAX: (858) 534-7394
Web: http://crl.ucsd.edu/~elman
September 4, 2016
Dear Emerging Area Review Committee,
Linda Smith asked me to write a letter of support for her team's Emerging Area
proposal, which I understand is a university-wide competition to support hiring and research
in new high impact fields of inquiry. To anticipate the punchline: This is one of the most
exciting proposals of this sort I have ever seen. It is outstanding, ambitious, and makes
enormous sense. The proposal comes from a group of scientists at the top of their fields,
and I think has a real likelihood of leading to high impact work that could change cognitive
science. I realize this is an unusually exuberant endorsement, so let me explain.
The proposal argues persuasively that cognitive modeling, computational
neuroscience and machine learning need to take development seriously. Given the focus on
learning that those two fields have traditionally had, it is odd that there has been such a
disconnect between them and developmental research. I say this as someone who came
rather late in my own career to appreciating that development holds the key to
understanding how complex behaviors emerge from often non-obvious origins.
The planned approach has three components, and they fit together in a way that is
insightful. The first set of projects uses deep-learning networks to understand reuse and the
developmental cascade. The approach is unprecedented in its use of training sets in
machine learning that are based on the statistical structure of children's visual experiences as
they progress from infancy to early childhood. The approach is also innovative in its parallel
experiments with children and deep-learning networks and its ability in both cases to “look
under the hood” through the use of neuroimaging and analyses of the internal workings of
the networks.
The second project builds on the pioneering developmental research at Indiana,
which I consider to be some of the most exciting in the world. This project attempts to
capture the multi-scale dynamics of real-world learning experiences, over both the macro-scale of development and the micro-scale of fractions of seconds. The planned machine-learning component requires multiple time scales in the algorithms, which in turn requires a
major (and much anticipated) advance in machine learning.
Finally, the third component of the proposal goes after what is perhaps the biggest
gap in contemporary understanding in human cognition and machine learning, which is how
perceptual development with its increasing narrowing, precision, and categorization
prepares the way for more advanced abstract learning and concepts.
The proposal is stunning in its conceptualization of the problem to be solved and
clear in its plan to solve that problem.
The unique strategy of injecting developmental approaches and research into
computational modeling in neuroscience, cognitive modeling and machine learning has the
potential for field-changing advances in each of these areas. It also has the potential to
bring these separate areas together to create a field of research that seamlessly combines
all three to find the general principles of learning that all these areas have been working to
discover, but hitherto in isolation.
Perhaps most importantly, I believe that this research program has consequences
for the lives of children. The evidence is now quite clear that early inequalities in children’s
environments have effects—in brain, in cognition, in behavior—by the third birthday and that
by this point some children's futures may already be firmly constrained, as these early
inequalities appear to set the path for success in school. What we do not know is the
mechanistic pathway: How are those early experiences changing internal processes?
What is the pathway from the early effects to later learning? These are the larger questions
that motivate this whole program, and without answers we cannot begin to address this critical
societal problem.
Indiana is unique in its complex systems perspective on development. The cognitive
science program at IU is one of the best in the world. The program has a long history of
landmark contributions that combine computational, cognitive, and neuroscience
approaches. I am enthusiastic and confident that this team, with additional investments
from Indiana University, can lead the field in this new approach.
Finally, the time is right for universities to invest in a big way in this research. Many
believe that the truly amazing advances in human neuroscience and in machine learning
put us at a tipping point for new insights into how systems learn. Recent calls from
federal funding agencies certainly reflect this belief. But if we want systems that learn like
people, then development has to be in the mix as well.
In sum, this is a tremendously exciting proposal. There are very few other
universities that are as well positioned to undertake such an ambitious program as
Indiana University. I look forward to the discoveries that will come from this new
collaborative research area.
Yours truly,
Jeffrey L. Elman
Chancellor’s Associates Distinguished Professor of Cognitive Science
Founding Co-Director, Kavli Institute for Brain & Mind
Dean Emeritus, Division of Social Sciences
Dear Emerging Area Review Committee,
This letter is written in support of the proposal titled “Learning: Brains, Machines and Children” by
Linda Smith and a team of interdisciplinary faculty at Indiana. I understand that the proposal is being
submitted to an internal Emerging Area competition to decide new investments by the university in new
high-impact research areas. In brief, this is a visionary proposal with the potential for field-changing
impact in the study of human visual learning and in machine vision. I should say at the outset that I
have a vested interest in all of this. Linda Smith, Chen Yu and I are currently funded by a collaborative
NSF grant that specifically seeks to use deep neural networks (DNNs) to understand how changes in
toddlers’ visual experiences around the first birthday, and changes in visual object recognition, may be
rate-limiting factors in the ability of toddlers to break into object name learning. Already, our findings
challenge current machine-learning approaches, which cannot readily exploit the structure in toddler
visual experiences.
The overall plan of this ambitious proposal is to support and cultivate collaborative research focused on
visual cognition as it relates to object recognition, early word learning, symbol learning and
mathematics. These goals raise the bar for machine vision considerably. The proposed approach is
novel and innovative in at least three ways: (1) in its use of machine-learning training sets based on the
statistical structure of children's visual experiences; (2) in its parallel experiments with children and deep-learning networks and its use of neuroimaging to "look under the hood" in human learning; and (3) in its focus
on how DNNs learn, with analyses of learning trajectories within and across layers. Goal three is
ambitious, with likely far-reaching consequences, and quite rightly it is the focus of the proposal for
three new hires, all in or connected to machine learning. The goal is to move the field from simply
demonstrating usable algorithms (although the benefits of these gains in machine learning to society
are notable in their own right) to an understanding of the principles, expressible as algorithms, that
underlie both human and machine learning. Understanding, and aligning, the similarities and
differences of learning by humans and machines is highly relevant to building networks of humans
and machines that can work (and learn) seamlessly together and from each other.
A quite exciting cohort of computational theorists and machine learners is now emerging, working
across human and machine learning and explicitly exploiting recent advances in neurocomputation and human learning. This proposal, and most critically the proposed hires, will put Indiana
in the game, a game that I sincerely believe will dominate all of science (as well as social science) over
the next decade. I believe it is important to emphasize the need to take steps now to grow capabilities in
this emerging area. The opportunity to act will not last indefinitely.
Indiana has unique and remarkable strengths in its developmental perspective and a unique opportunity
to lead in this new field. I am enthusiastic and confident that this team, with additional investments
from Indiana University, can lead the field in this new approach. In sum, this is a very exciting
proposal. I look forward to the discoveries that will come from this new collaborative research area.
Sincerely,
Dr. James M. Rehg,
Professor, School of Interactive Computing
Director, Center for Behavior Imaging
September 9, 2016
Rick Van Kooten, Vice Provost for Research
Office of the Vice Provost for Research
Carmichael Center, Suite 202-204
530 E. Kirkwood Avenue
Bloomington, IN 47408
Dear Vice Provost Van Kooten,
The three divisions and the three schools in the College of Arts and Sciences put forward
a total of 39 pre-proposal abstracts. Of these 39 pre-proposals, 17 were developed into
full proposals. The majority of the 17 investigators met with at least one dean in the
College prior to submitting their proposals. We will focus on these 17 proposals in this
letter of support.
Below is a ranking of proposals into 3 categories. Three proposals were ranked in the
highest category. To be ranked in the highest category the proposal had to demonstrate a
coherent strategy to meet fully the EAR objectives to “support investigators and teams of
investigators who are prepared to undertake a significant and complex investigation
involving fresh approaches to research. The research or creative activity that is emerging
or under development should capitalize on existing strengths on campus; have the
potential for enhancing the volume, quality, impact and reputation of research at IU
Bloomington; and lead to federal, corporate, or private funding.” Thus, the primary
criterion to be ranked in the high category is research excellence and we favored
proposals led by faculty who have an established record of excellence in research or
creative activity.
In addition, the highest category proposals also met wider College objectives. In
particular, we favored proposals that came not only from strong individual faculty
members, but those from strong departments where a new emerging area of research
builds on our current strengths and where there is demonstrated success in collective
vision and action. In addition to research excellence, we also supported proposals where
there are excellent graduate and undergraduate programs that place their graduate
students in top institutions and where there is high demand for undergraduate enrollment.
Proposals in the second category are interesting and provocative in their approach to
emerging areas of research and have our support, but holistically they are not as strong as
those in the first category on at least one EAR or broader College objective. We find all
of the proposals in the third category to have significant merits, but they are not rated
as highly on the EAR or College objectives as those in the first two groups.
v Category 1 (highest ranked, in no particular order)
I am happy to provide my strongest support to the following Emerging Areas of Research
proposals:
Learning: Machines, Brains, and Children, submitted by a group led by Professor
Linda Smith of the Department of Psychological and Brain Sciences. The program
outlined in this proposal promises to place IU at the forefront in a growing area of
convergence in the disciplines of machine learning and developmental psychology. It
builds upon strong existing programs in both developmental psychology in the College
and machine learning and computer science in the SoIC; a number of our most
accomplished researchers are team members. Unlike many EAR proposals, this proposal
arose organically from discussions among team members on this topic that began
well before the EAR call, which speaks to the likely long-term sustainability of this
effort. The additional faculty and post-doc hires funded through the EAR program would
take a strong research effort that has already made impressive strides towards forging the
necessary cross-disciplinary ties, and make IU a world leader in the development of
machine learning tools derived from advances in early childhood development.
Please let me know if you have questions about any of this. Let me take this opportunity
to thank you for this great opportunity.
Sincerely yours,
Larry D. Singell
Executive Dean
September 8, 2016
Dear Vice Provost Van Kooten and members of the EAR Review Committee,
I’m writing to offer my strong and enthusiastic support for the Emerging Areas of Research (EAR)
proposal titled “Learning: Machines, Brains, and Children.” I have discussed this proposal with Linda
Smith, one of the Co-PIs, to consider the direction of the proposal, its alignment with our department's
strengths and ambitions, and the involvement of our faculty and students. On all these fronts, I see
tremendous opportunities for synergy with department priorities. In addition, I see extraordinary
opportunity to position Indiana University as a national leader in this important and cutting-edge area of
scholarship. In my humble opinion this proposal would be an excellent investment for buttressing
connections to SoIC and the nascent Department of Intelligent Systems Engineering through the
traditional areas of strengths in our department represented in this proposal. The result would be a highly
innovative program that would certainly establish IU as a national leader in this area.
Significance and funding potential. As described in the proposal, this emerging area of research is
positioned to have huge impacts on artificial intelligence, decision science, big data, industry, and human
well-being. And, there is increasing awareness that the big breakthroughs will, as the proposal states,
come from research at the intersection of human learning, neuroscience, and machine learning—three
research domains that have, unfortunately, drifted apart at many institutions. IU has an opportunity here to
compellingly unify these areas using this stellar team as a foundation. The proposed hires would propel
this area of research at IU to national prominence. Moreover, this team, given their strong history of
external funding and the importance of this work, will be well-positioned to garner external funding from
the National Science Foundation, Department of Defense, Office of Naval Research, and the Defense
Advanced Research Projects Agency (DARPA).
Alignment to our department’s goals and strengths. The IU Department of Psychological and Brain
Sciences is deeply committed to the study of learning from multiple perspectives: behavioral, cognitive,
developmental, neural, and computational. Our strengths in this area are manifest in the strong research
programs of the PBS co-authors of this proposal: Linda Smith, Chen Yu, Karin James, Rob Goldstone,
David Land, Mike Jones, and Olaf Sporns. In addition, our strengths in this area will be a terrific
recruiting tool and incentive for prospective hires to come to IU. Linda Smith and Rob Goldstone are
members of the American Academy of Science and Olaf Sporns is one of the most renowned
computational neuroscientists in the world. But please don't get me wrong: what this team proposes is to
leverage these strengths to build something that does not currently exist at IU and that addresses a
major knowledge and methodological gap in the field. And this proposal is much bigger and more ambitious
than anything that could be done within any unit or department on campus and cleverly leverages a
transdisciplinary approach.
Support for proposed hire(s) in PBS. For the reasons described above, I strongly support the proposed
faculty hire(s) in the Department of Psychological and Brain Sciences (PBS). One hire is slated for PBS
and another will be in SoIC. The team proposes that the third hire will be split between the PBS/College
or placed entirely in one or the other unit—depending on fit. These PBS-related hires are entirely
consistent with our department’s vision for strategic hiring and research. Moreover, given the research
focus of the targeted hires, these hires will build even stronger interdisciplinary bridges between
PBS/College and the SoIC.
Additional PBS support for infrastructure. PBS is also committed to making space available for the
Smart Room and assisting with renovation costs.
Conclusion. This project has my strongest and most enthusiastic support. If you have questions that I
might be able to answer, please do not hesitate to email ([email protected]) or call (812-855-2620).
Sincerely,
William P. Hetrick, PhD.
Professor and Department Chair
Department of Psychological and Brain Sciences
College of Arts and Sciences
Indiana University Bloomington