
Scot Sutherland <[email protected]>
To: Scot Sutherland <[email protected]>
Sun, Jun 28, 2015 at 3:54 PM
Hello Dr. Wise,
In response to the very helpful reviews, I have made extensive revisions to
the document as follows:
To address the disconnect between language referring to "cognitive
analytics" and the function of the tool, I have renamed the article and the
tool "constraint-referenced analytics" to reflect its design and function,
leaving what it measures to the discussion.
To frame the study, I moved away from the literature on equivalence toward
sense-making and recent discussions of the relation between conceptual and
procedural understanding in mathematics, which is central to the additional
analysis included in the revised report.
I grounded the complexity model more firmly in its computer science roots
to show the connection between constraint-based approaches and semantic
parsers. I admit that the complexity model as currently designed is very
rudimentary, leaving future studies to explore the possibilities of
constraint-referencing for all kinds of meaning-making analytics.
The most substantial changes addressed the fourth concern, about how the
analytics tool might provide feedback for teachers and students. I
shortened the section on validating the tool and greatly expanded the
analysis to include moving averages that show performance over time. The
new section illustrates how the analytics tool revealed the conditions
under which learning occurred for these students.
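For illustration, a moving average of success rate can be computed in
essentially the following way; the window size and the 1/0 outcome coding
below are placeholders for the example, not the exact implementation:

    # Minimal sketch: moving-average success rate over a time-ordered log of
    # transformation attempts, coded 1 (equivalent) or 0 (not equivalent).
    def moving_success_rate(outcomes, window=20):
        """Success rate over the last `window` attempts, at each attempt index."""
        rates = []
        for i in range(len(outcomes)):
            recent = outcomes[max(0, i - window + 1): i + 1]
            rates.append(sum(recent) / len(recent))
        return rates

    attempts = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]  # hypothetical outcome log
    print(moving_success_rate(attempts, window=5))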
You will also find the article to be much shorter.
Sincerely,
Scot M. Sutherland, Ph.D.
On Tue, May 12, 2015 at 12:42 PM, Alyssa Wise <[email protected]> wrote:
Dear Dr. Sutherland,
I wanted to check in with you to confirm that you had received the editor decision on your
manuscript "Cognitive Analytics of Algebra Learning" submitted to the Special Issue on Learning
Analytics and Learning Theory in the Journal of Learning Analytics (copied below) and inquire as to
whether you and your co-authors were still interested in including the article in the special issue and
would be able to submit the revised version by June 26th, 2015.
Best,
Alyssa
---------------------------------
Dr. Alyssa Wise
Associate Professor, Faculty of Education
Coordinator, Educational Technology & Learning Design Program
Simon Fraser University
250-13450 102nd Avenue
Surrey, BC V3T 0A3
Canada
p: 778-782-8046
e: [email protected]
---- Original Message ----
From: "Alyssa Wise" <[email protected]>
To: "Dr. Scot McRobert Sutherland" <[email protected]>
Cc: "Tobin F White" <[email protected]>
Sent: Monday, April 27, 2015 4:11:08 PM
Subject: [JLA] Editor Decision
Dear Dr. Scot McRobert Sutherland,
Thank you for submitting your manuscript, Cognitive Analytics of Algebra
Learning, to the Special Issue on Learning Analytics and Learning Theory in
the Journal of Learning Analytics.
Your article was sent for peer review, and based on the reviews we received
we are hoping to include the manuscript in the special issue. However, as
you'll see from the reviewers' comments, we'd like to ask you to make some
significant revisions to the current version.
As you are revising the manuscript, please carefully consider the
reviewers' specific comments. Overall our assessment is that the most
critical issues to focus on are (1) the disconnect between language that
refers to “cognitive analytics” but measures that assess
task/transformation complexity and success/failure, not learners'
cognitive efforts or processes directly; (2) general references to
constructivism but empirical work that does not address processes of
reasoning, sense-making, or understanding of equivalence; (3) complexity in
expressions is not conceptualized and operationalized in a sufficiently
grounded way, in line with current thinking in the field; (4) the question
of in what way these analytics are diagnostic in informing feedback to
learners and/or teachers' pedagogy. In addition, the introductory sections
would benefit from a clearer and more focused treatment of only the core
issues to be addressed by the work.
After you've had a chance to read the reviews, please let us know if you are
still interested in including the article in the special issue and, equally
important, whether you will be able to submit the revised version
by June 26th, 2015. Because of the timing of the JLA Special Issue, we
will only be conducting one round of revisions, so we'll make a final
decision about publication based on your revised manuscript at that point.
We also want to let you know of an exciting development for the special
issue. As you know, our hope is that this collection of articles will help
the field address the challenge of creating learning-theory-informed analytics.
As you surely know from your own work, this is no small task. To highlight
the contribution each of the articles in the special issue is making to this
challenge, we are planning to invite a short commentary piece for each
article, designed to highlight what the work has successfully accomplished
in connecting learning theory with learning analytics and also discussing
how future work can build on the work being done. We hope you are as excited
about this special format as we are; if you have any questions, please feel
free to get in touch.
Sincerely,
Alyssa Wise (Simon Fraser University)
David Shaffer (University of Wisconsin-Madison)
Special Issue on Learning Analytics and Learning Theory Editors
Journal of Learning Analytics
-----------------------------------------------------
Reviewer A:
The manuscript describes a proposed method for embedded assessment of
mathematics learners. The presentation is clear. I am not an expert on LA,
but I had certain concerns about this submission. For the most part, my
concerns were with respect to the study's rationale, innovation, and
contribution.
The introductory material of the paper raised for me questions as to the
novelty and/or contribution of the proposed cognitive-analytics approach. As
I read on, I also became worried about the validity of the numerous
assumptions that went into the coding of algebraic propositions.
The Author refers to the constructivist perspective as a theoretical
resource selected for this study as a means of assessing the difficulty of
substituting one algebraic expression for another. It is difficult to
understand how constructivism, a philosophy of the learning process, is
invoked so pervasively throughout the paper, given that the study is
focused on assessment rather than on learning process. At no point in the
entire paper is any attention given to processes of sense-making,
reasoning, and learning. Rather, the rationale is identical to the
rationale of familiar cognitive tutors, even if the underlying machinery is
constraint-based rather than product-based. As such, this paper presents a
technique that would not push the field forward. The field is moving toward
embedded formative assessment for learning. Here we have embedded stealth
assessment for testing.
The author differentiates his work from views of learning as passive
knowledge acquisition. But I wonder whether this "cognitive analytics"
system is indeed new or whether it is rather the same system as before,
only with a higher frequency of testing for knowledge.
I note, too, that the Author is referencing conceptual models used in
cognitive tutors. A problem with cognitive tutors is that they are based on
a Skinnerian conceptualization of learning machines. Cognitive tutors
operate on the premise that learning is the optimization of a solution
algorithm; there is little to no treatment of meaning. See, for example, the
recent collaborations of Pat Thompson (ASU) with measurement experts trying
to pin down mathematics teachers' understanding of content. The field has
moved on.
I found the section on Mathematical Task Complexity somewhat inchoate. In
particular, I was not sure whether task complexity is an objective or
subjective measure. Also, I was not sure how task complexity relates to
Rossi's point about confusion between similar (yet very different) symbolic
structures, a phenomenon that the author dubs applying an irrelevant
procedure.
Ultimately I worry that the argument put forth in this manuscript is
circular. The argument is that students' difficulty with algebra content,
as measured in this assessment study, was more or less as expected from the
analysis of task difficulty. I am not sure how to regard this finding as
lending new insight either on learning or on assessment, and I cannot see
the potential of this direction. These days, there are increasingly popular
analytic tools for capturing learning as it is occurring, rather than
as a sequence of high-frequency assessments of what is easy or difficult. It
is not clear what this manuscript's contribution might be to educational
research and, in particular, to learning analytics.
My concern is that this work is not necessarily taking us forward. I'm
thinking of other work out there now, such as that of Zack Pardos, Ryan
Baker, Paulo Blikstein, Robert Mislevy, and others, who are working on
multimodal learning analytics, figuring out the telemetry of
embodied-interaction interfaces, measuring affect and engagement, and
ironing out the kinks between high-frequency formative assessment and
tutorial practices. Where is this manuscript with respect to these various
ground-breaking efforts? I worry that it is not current.
The author writes on p. 8: “The structure of the original algebraic
expression and the resulting expression are modelled by counting elements
present in the expression (i.e. constant terms, linear terms, quadratic
terms, negative symbols, parentheses pairs, etc.). The difference between
the two expressions along each of these vectors provides a model of the
transformation attempt. ” This is a confusing statement. How can
“structure” be modeled as a single value or string of values? Also,
according to this rationale, “x + x + x + x + x + x + x + x + x + x =
10” is “structurally” more complex than “5x + 5x = 10”. Does that
make sense?
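To make my point concrete, here is a rough sketch of what such surface
feature counting could look like; the feature names and patterns are my own
illustrative assumptions, not the author's code. Because the counts are
taken over the surface string, the unsimplified sum indeed scores far
higher:

    import re

    # Naive sketch of counting surface features of an expression string.
    # The feature set and regular expressions are illustrative assumptions.
    def feature_counts(expr):
        return {
            "linear":    len(re.findall(r"\d*x(?!\^)", expr)),          # terms in x
            "quadratic": len(re.findall(r"\d*x\^2", expr)),             # terms in x^2
            "constant":  len(re.findall(r"(?<![x^\d])\d+(?!x)", expr)), # bare numbers
            "negative":  expr.count("-"),
            "parens":    expr.count("("),
        }

    print(feature_counts("x+x+x+x+x+x+x+x+x+x"))  # 10 linear terms counted
    print(feature_counts("5x+5x"))                # only 2 linear terms counted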
Next, the author writes: “A constraint-based parser determines whether the
transformation was equivalent and derives the mathematical characteristics
of the two expressions and the transformation (i.e. symbolic complexity,
proximity to simplest terms, types of elements, etc.). ” That is not
clear. How can the constraints-based parser, which apparently responds only
when an equivalence has been violated, derive the mathematical
characteristics of the two expressions and the transformation? And how does
this value compare to the “structural” analysis?
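For comparison, a plain equivalence check is easy to state with a computer
algebra system; the sketch below (using sympy) is only an assumed analogy
for whatever the constraint-based parser actually does, not the author's
implementation:

    import sympy as sp

    # Sketch: two expressions are equivalent transformations of one another
    # if their difference simplifies to zero.
    def is_equivalent(before, after):
        return sp.simplify(sp.sympify(before) - sp.sympify(after)) == 0

    print(is_equivalent("2*(x + 3)", "2*x + 6"))  # True: correctly distributed
    print(is_equivalent("2*(x + 3)", "2*x + 3"))  # False: constant not distributed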
What is “construct theory”? I did not understand this sentence:
“Construct validity is established if the method of measurement produces
systematic results that support the construct theory”
The author writes: “Studies show that a sophisticated understanding of
equivalence is associated with success in algebra (Ball et al., 2003; Kieran
& Sfard, 1999; Knuth et al., 2005; Saldanha & Kieran, 2005; Steinberg et
al., 1991)”. And yet the proposed methodology is NOT at all about
sophisticated understanding of equivalence. Rather, it is about rote
production of algorithm steps. That is certainly an important competence,
but it is not sophisticated understanding. The researchers cited here, as
well as others who are not cited, are looking at deeper issues, such as the
implicit dynamical image schemas by which students approach a mathematical
proposition, e.g., as a left-to-right arithmetic operation or as a balance
scale.
In the last paragraph before Methods, the author speaks of cognitive
analytics as though this is the first embedded assessment method. That is
grossly inaccurate.
The author writes, “Symbolic complexity of each expression is calculated
by summing all the elements together. This approach borrows from
constructivist notions that the meaning and quantity of the symbols
influences the complexity of the task. Cognitive complexity is determined
by counting the feature-types present, reflecting the cognitive
constructivist perspective that elements of the expression are formed into
schema or chunks. The number of different kinds of schema that must be
coordinated in working memory determines the complexity of the task. A
summary of these functions is listed in Table 4.” I find it very
difficult to accept these categorizations of rote steps in an algorithmic
procedure as somehow affiliated with a “constructivist” or “cognitive
constructivist” perspective. The use of “schema” in relation to
“features” does not appear to accord with the work of Jean Piaget.
The author appears to be imputing his own mathematical readings of a
symbolic expression to all students, whereas in fact students do not at all
‘see’ these expressions as such.
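As quoted, the two metrics appear to reduce to something like the following
sketch; the feature vector and its values for 4x + 3 are illustrative
assumptions, not the author's actual coding:

    # Sketch of the two quoted metrics over an assumed feature-count vector.
    def symbolic_complexity(features):
        return sum(features.values())  # "summing all the elements together"

    def cognitive_complexity(features):
        return sum(1 for n in features.values() if n > 0)  # feature-types present

    features = {"constant": 2, "linear": 1, "quadratic": 0,
                "negative": 0, "parens": 0}  # one possible coding of 4x + 3
    print(symbolic_complexity(features))     # 3
    print(cognitive_complexity(features))    # 2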
As for the various charts and tables: I worry that the author may be
ignoring the base distribution of items by difficulty. For example, the bar
charts that count the number of attempts per difficulty level should be
normalized by showing percentages rather than frequencies.
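Concretely, the normalization I have in mind is nothing more than the
following (the counts are hypothetical):

    # Sketch: convert raw attempt frequencies per difficulty level into
    # percentages of the total, so the base distribution is visible.
    counts = {1: 420, 2: 310, 3: 95}  # hypothetical attempts per level
    total = sum(counts.values())
    percentages = {level: 100 * n / total for level, n in counts.items()}
    print(percentages)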
On p. 22 the author writes, “cognitive analytics system produced metrics
capable of measuring cognitive processes when transforming algebraic
expressions.” Again, I do not think this method measures process at
all. All we learn is that the more difficult a problem, the more likely an
error. That is hardly news.
-----------------------------------------------------
Reviewer D:
The paper introduces metrics for measuring the cognitive difficulty of
algebraic transformations. It then shows that when the metrics are applied
to log data capturing students' attempts to transform algebraic
expressions, the success rate decreases for more complex expressions.
The paper has several components. It starts by reviewing general
requirements for creating a measurement tool and then focuses on sources
of difficulty in learning algebra. It introduces the constructs of
mathematical and symbolic complexity and discusses structural and
operational duality as a source of difficulty in the transition from
arithmetic to algebra. Next, it ventures into describing cognitive
complexity through “streams of research measuring cognition” (page 5);
however, it is not clear at that point, nor is it made clear later, in what
way this part contributes to the proposed constructs. In particular, I did
not see how the paragraph on research in cognitive neuroscience contributes
to the paper.
In the next part the paper aims to coin the term “cognitive analytics”.
It states: “The cognitive analytics tool developed for this study monitors
cognitive processes by measuring responses against a model of the rules for
transforming algebraic expressions encoded as a system of constraints.”
First, it builds on the idea of constraints for transformations of algebraic
expressions (similar to constraint-based ITS, as described). The second
proposition is more problematic, as the author claims that cognitive
analytics combines network analytics with a constraint-based model. However,
the justification for this is that the system “monitors network
activity”. This is a rather weak connection; I would expect the main
concepts and constructs of network analytics to be built on explicitly.
Indeed, the claim is not borne out by the actual experimental setup.
Although the setup uses a network and pairs of students working together,
the unit of analysis is an algebraic expression and attempts to transform
it by a SINGLE student. Hence this proposition is not further utilized in
the concept of cognitive analytics.
As a result, I see the proposed cognitive analytics concept as comprising
only the metrics for measuring the complexity of the transformed algebraic
expressions. Cognitive analytics = metrics for expressions and their
transformations.
The rest of the paper shows that the proposed metrics indeed reflect the
cognitive challenge of the tasks, as higher metric values result in a lower
success rate. Is this worth calling the system “Cognitive Analytics”? I
would argue that it is not. Two main points: 1) the metrics are metrics for
the formulas, and they do not depend on learner effort; 2) given the
starting formula and the resulting formula (the one the learner attempted
to convert to), we can compute the metric values without any learners
present. In my opinion, these are good metrics for predicting which tasks
will create more challenge for students than others, but these metrics do
not MEASURE cognitive effort. I understand that this is just my opinion and
further discussion is certainly warranted.
The paper follows with the Methods section. Two systems that were used in
classroom settings are described here. The description of Terms and
Operations needs improvement. In particular, it is not clear how students
interact with the system: whether they work with the display, only with
their calculators, or both. It is also not clear at which point the system
updates “collective expressions”. For both systems, rather than showing
screenshots of the calculator, I would suggest taking a high-resolution
photograph of the real calculator. A flowchart of the process students go
through, with activity steps, would add clarity about what students do and
when the system intervenes (if at all).
Secondly, the two high-school classes the data was collected from are
referred to as Period 1 and Period 3 without any explanation, which brings
confusion and raises some questions. For example, was there a Period 2 that
is not reported?
Page 15 (top) reports approximately 2800 attempts; report the exact number.
How does this number relate to the numbers reported in Table 6 on page 16?
They both refer to “equivalent transformation attempts”.
There is quite significant repetition of some content, such as the
paragraph on page 17 (“Studies identify…”) and “modeling expression
complexity” on page 15. Similarly, the “symbolic complexity model” on
page 17 has been described at least once before. These paragraphs do not
add to explaining the results and should be removed.
On page 21, the last paragraph in “Class comparisons” states: “The
results show that CR analytics were a reliable indicator of the influence of
symbolic and cognitive complexity on success rate across these classes in
the sense that increasing levels of complexity resulted in a systematic
decrease in success rate and the rate of decline was similar for both
classes.” This is a fair conclusion. However, I do not agree with the
following sentence: “They also indicate that the cognitive analytics
system produced metrics capable of measuring cognitive processes when
transforming algebraic expressions.” The reason is that what the proposed
cognitive analytics system did was to count the success rate for
transformations of different complexity. What it established is that the
metrics reflect the complexity of the transformations. I do not see how the
conclusion about measuring cognitive processes is being made. When I observe
an individual student, how can I use the system proposed here to measure her
cognitive processes? I can observe her over a series of attempts at
different complexity levels, but as the system stands right now I would not
be able to say much about her cognitive processes. I think this conclusion
is premature and should be removed.
In the discussion section, paragraph 3 highlights cognitive analytics'
non-interruptive nature and then goes on to say that feedback needs to be
studied. I do not see how this is any different from other types of LA. I
suggest rewriting the paragraph.
To conclude: interesting work is being done here. The analysis of the data
is well done from a technical standpoint. However, the proposed concept of
cognitive analytics is not presented convincingly and clearly enough to
warrant coining this new term within the context of the work as it is
reported.
-----------------------------------------------------
Reviewer E:
The author needs to use a little space up front to explain to the reader
what this paper is about. Not enough is provided in the introduction to
help a reader make sense of why the various parts of the literature review
are present and how everything fits together. For example, does this study
use cognitive tutoring? If so, how does it extend the work described in the
literature review? If not, how should the reader understand the importance
of cognitive tutors to the present study?
While I can appreciate that the author is trying to be concise in presenting
the literature review on Sources of Difficulty for Algebra Learners, that
conciseness has come at a cost in terms of precision. The author presents a
number of assertions that are broad generalizations about traditional
mathematics. While traditional mathematics instruction has a stronghold on
teaching and learning, there are many programs that do not subscribe to it.
Further, with the implementation of the Common Core, we are seeing more
movement away from some of the traditional issues. This should be
acknowledged in the literature review, and nuanced language should be used
more consistently.
In the section on Symbolic Complexity and Meaning, nearly all of the studies
cited are 10 years old or more. This is odd given that algebra is a commonly
researched area of mathematics. Further, the author might want to delve into
the literature on early algebra, which offers new insights into how students
can learn algebra when it is taught without the emphasis on the symbol
system.
I am unconvinced that Table 1 is presented with enough supporting detail for
someone not well-versed in mathematics to make sense of what is important in
the table.
The discussion of the issues with the equal sign appears in two different
sections, and it is unclear why these issues are included in both. The
author should either justify why the idea that the equal sign indicates an
operation needs to appear in two sets of difficulties, or limit the
discussion to one of those sections.
It is unclear why the author described four areas of complexity for algebra
given that this study is focused only on the idea of equivalence. Further,
the author did not use the areas of complexity introduced earlier in the
paper to situate how equivalence was being conceptualized. Is the hypothesis
that equivalence is a symbolic issue or a structural one? (This goes back to
the earlier comment about why equivalence appears in both places.) In
short, the front end needs to be better connected within itself, and the
author needs to better situate the study so that the reader can understand
how the introduction, literature review, and research questions are
influencing each other.
In the methods section, the author does not spend enough time explaining the
idea of the operationalization of cognitive complexity. The author asserts
that the constructivist notions of “meaning and quantity of the symbols
influences the complexity of the task”. How is this constructivist?
Similarly, the author continues noting, “Cognitive complexity is
determined by counting the feature-types present, reflecting the cognitive
constructivist perspective that elements of the expression are formed into
schema or chunks.” Why is this so? Why has the author determined that
these are the features that matter for this analysis? (Much of the confusion
here is likely linked to the underspecification of the research problem and
questions in the first part of the paper.)
Each of the features shown in Table 3 needs to be defined. It is not clear
what a linear symbol, a constant symbol, or a quadratic symbol is.
There is a deep flaw in quantifying mathematical objects to determine the
complexity of a situation. This approach does not consider that the
development of number sense and algebraic reasoning actually serves to lower
the cognitive complexity of the expression. For example, for a learner with
weaker understandings of numbers and expressions, 4x+3 may involve 2
constant symbols and one linear symbol. However, if a student has a
meaningful understanding of this symbolic representation, that student
considers 4x as meaning the magnitude of x made 4 times larger. This
creates a new number for the student that is not made of 2 separate symbols,
but rather a single understood quantity. One could argue that algebra
learners may not have the sophisticated understandings of expressions to
allow them to see 4x as a single quantity; however, that is the kind of
teaching and learning of algebra that is valued in the field. So, to assert
that students do not have this kind of reasoning requires data to support
such an assertion. Perhaps less rooted in philosophies of mathematics
learning, the presence of more than one negative sign (particularly in cases
of negatives of negatives) significantly complicates a problem for most
learners, yet in this analysis one negative is counted the same as multiple
negatives. Complexity can come from many sources, but simply counting the
number of each kind of symbol seems somewhat superficial, and the author has
not presented a solid argument for why this is a valid or reasonable
approach to interpreting the cognitive processes needed for problem solving.
Throughout the paper, the author makes assertions about constructivist
interpretations of cognition. These need to be cited and better grounded as
they represent only one aspect of constructivist interpretations (seemingly
tightly tied to schema theory, which is a narrow view of constructivist
learning). If the author is going to continue to draw on this line of
theory, it seems that the Cognitive Complexity Model section (and, thus, the
literature review) needs to draw from studies of algebraic understanding
that are based in schema theories. The author needs to use this as the basis
for making the assertions about why and how the “features” of this
analysis are appropriate.
The author argues for theory-driven development of analytics and claims that
this work is an example of such an effort. This is a good argument, but the
paper falls short of really explaining the relationship between theory and
the analytics. Similarly, the author describes the results presented as
promising in terms of showing a potential for student learning. However,
the data presented do not answer a question about teaching and learning
algebra. There seems to be a misalignment between what the paper is doing
(describing this approach to using analytics to measure cognitive
processes) and what the author wants to make claims about (that there is
something promising in the approach used in the classroom for supporting
students' learning of algebra). This misalignment needs to be addressed.
Minor Issues to attend to:
Throughout the paper, modeled is spelled “modelled” and labeled is
spelled “labelled”.
E-commerce typically has a hyphen in it. In the paper it is presented as
“ecommerce”, which seems strange.
Figure 2 is not referred to in the text; therefore, it is unclear what it is
showing.
In the section Constraints Parser and Canonical Form, the author refers to
Table 1 in the second sentence. That reference should be to Table 2.
Parentheses is spelled two different ways in Table 3.
The journal website states that it prefers papers of 2,000-5,000 words.
This paper is over 11,000 words.
-----------------------------------------------------
Journal of Learning Analytics
http://learning-analytics.info