An Eight-Level Tree Structure Implementing Hierarchies of
Program Assessment Processes
Iraj Danesh
Department of Mathematics and Computer Science
Alabama State University
Montgomery, AL 36104
334-229-4355
[email protected]
ABSTRACT
This paper, based on the principles of good practice for assessing student learning [4], develops and implements hierarchies of program assessment processes in an eight-level tree structure. It starts with the program at the root (level 0), subdivides major assessment steps into particular measurable steps (top-down design), continues through several levels of assessment processes, and ends at level 7 (the leaves), where measurement tools are developed and administered. A bottom-up traversal of the tree completes the process of consolidating data and effecting improvements based on the analysis of the results.
Categories and Subject Descriptors
K.3.2 [Computers and Education]: Computer and Information Science Education - accreditation, self-assessment
General Terms
Design, Measurement
Keywords
Assessment, outcomes, enabling to attain, continuous improvements, rubrics, collecting, sampling, mapping, consolidating, analyzing, documenting, reporting
1. INTRODUCTION
Accrediting agencies such as SACS, NCACS, and EASC grant the status of public recognition to programs that meet the agencies' standards, criteria, and requirements. ABET criteria [1] such as Program Educational Objectives (PEOs), Student Outcomes (SOs), and Continuous Improvement (CI), as well as data collection and consolidation strategies, organizational structures (committees), and methods, means, and tools of assessment, including the role of the categories of Bloom's Taxonomy domains in the development of high-level rubrics, are discussed as guidelines for program assessment, self-study, and accreditation.
2. STRUCTURE AND LEVELS OF
ASSESSMENT TREE
Figure 1 is an exemplar, succinct tree structure developed on the assumption of five PEOs, nine SOs (ABET SOs 'a' through 'i'), six performance indicators of student outcomes (PISOs) and three courses for each SO, and ten course outcome indicators (COIs) per course, together with the necessary means and tools of assessment. It comprises eight levels of assessment processes, providing an overall view of the program assessment tree for continuity of improvement and attainment of SOs. Each level is associated with processes that accumulate specific data, exchanging feedback with the layers above and below it. The roles of the PEO, SO, and course CI cycles (depicted in Figure 3) are realized in levels 1, 2, and 4.
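For illustration only, this exemplar hierarchy can be represented as a simple recursive node structure. The sketch below follows the counts assumed above (five PEOs, nine SOs, six PISOs per SO, three courses per SO, ten COIs per course); the class name, labels, and the PEO-to-SO attachment are hypothetical rather than part of the assessment plan itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One node of the eight-level assessment tree (level 0 = program root, level 7 = tools)."""
    level: int
    label: str
    children: List["Node"] = field(default_factory=list)

def build_exemplar_tree() -> Node:
    """Build the exemplar tree: 5 PEOs, 9 SOs, 6 PISOs per SO, 3 courses per SO, 10 COIs per course."""
    program = Node(0, "Program")                                      # level 0 (root)
    program.children = [Node(1, f"PEO #{i}") for i in range(1, 6)]    # level 1
    for letter in "abcdefghi":                                        # level 2: SOs 'a'..'i'
        so = Node(2, f"SO '{letter}'")
        program.children[0].children.append(so)   # PEO-to-SO mapping is illustrative only
        so.children = [Node(3, f"PISO #{j}") for j in range(1, 7)]    # level 3
        for piso in so.children:
            for c in range(1, 4):                                     # level 4: three courses per SO
                course = Node(4, f"Course {c} for SO '{letter}'")
                piso.children.append(course)
                course.children = [Node(5, f"COI #{k}") for k in range(1, 11)]  # level 5
                for coi in course.children:
                    means = Node(6, "Means: test, assignment, project")          # level 6
                    means.children = [Node(7, "Tools: rubric, percentile")]      # level 7 (leaves)
                    coi.children.append(means)
    return program

def height(node: Node) -> int:
    """Number of levels on the deepest root-to-leaf path."""
    return 1 + max((height(c) for c in node.children), default=0)

print(height(build_exemplar_tree()))   # 8, i.e., levels 0 through 7
```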
2.1 PEO-Level (Level 1)
"PEOs are broad statements that describe what graduates are expected to attain within a few years after graduation" [2, 14]. They serve as targets for career development (e.g., becoming CEOs and entrepreneurs). To maintain the continuity and attainment of PEOs, and based upon input from the program constituencies and the ABET and ACM/IEEE-CS recommendations, PEOs are reviewed and changes are adopted periodically (known as the Slow Cycle, with a frequency of six years) by the CSC Curriculum Committee. Figure 4 provides an exemplar excerpt of the involvement of various constituencies and consulting entities in PEO development. Evaluation of the achievement and attainment of PEOs is no longer required by ABET. However, PEOs should remain affirmed and consistent with the institution's mission and the constituencies' needs.
2.2 SO-Level (Level 2)
"SOs indicate the ability of students to use the knowledge they gained at the time of graduation" [6] (e.g., students are able to analyze computing algorithms). The default ABET/CAC-recommended SOs 'a' through 'i' (referred to as "characteristics") can be found
in reference [6]. They prepare graduates to attain the PEOs and are affirmed, assessed, periodically reviewed (every three years, through the CI SO-Cycle), and updated as changes are adopted. The CI processes demonstrate how students are enabled to attain all the characteristics. Spreading SO data collection over alternating years (Table 1) is acceptable and even desirable.
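One illustrative way to generate such an alternating-year schedule follows; the grouping of three SOs per year is an assumption consistent with the exemplar counts, not a prescription of this paper.

```python
def so_collection_schedule(sos=tuple("abcdefghi"), cycle_years=3, horizon=6):
    """Rotate SO data collection so each SO is assessed once per three-year cycle."""
    per_year = -(-len(sos) // cycle_years)          # ceil(9 / 3) = 3 SOs assessed each year
    schedule = {}
    for year in range(1, horizon + 1):
        group = (year - 1) % cycle_years            # which third of the SOs this year covers
        schedule[year] = sos[group * per_year:(group + 1) * per_year]
    return schedule

for year, group in so_collection_schedule().items():
    print(f"Year {year}: data collected for SOs {', '.join(group)}")
# Year 1: a, b, c   Year 2: d, e, f   Year 3: g, h, i   Year 4: a, b, c   ...
```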
2.3 PISO-Level (Level 3)
PISOs assess SOs and are similar to leading economic indicators [3]. Every ABET SO is broken down into six simple statements called PISOs (Figure 1); these are measurable aspects that allow one to determine the extent to which the outcome is met. Enabling the characteristics does not mean the goals are necessarily met. Success targets for the achievement of PISOs are established and measured at this level (e.g., at least 80% of students demonstrate excellent or good performance and at least 90% demonstrate acceptable performance; if the targets are not achieved, improvement is needed). PISOs may constitute the dimension components of an analytic rubric [3] with up to five levels of performance (scale). To ensure that different instructors at different times characterize student performance consistently, a holistic rubric associated with each performance indicator may be developed. Holistic rubrics are best suited for PISO data collection, bringing uniformity and consistency and playing a significant role in reducing faculty scoring bias. A sample holistic rubric for PISOs can be found in reference [11].
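To make the success-target arithmetic concrete, the following sketch checks one PISO's rubric ratings against the 80%/90% targets mentioned above; the five-level scale labels and the sample data are illustrative only.

```python
from collections import Counter

# Illustrative five-level scale for one analytic-rubric dimension (a PISO).
SCALE = ["unsatisfactory", "minimal", "acceptable", "good", "excellent"]

def piso_targets_met(scores, top_target=0.80, acceptable_target=0.90):
    """True if >=80% of students scored good/excellent and >=90% scored at least acceptable."""
    counts = Counter(scores)
    n = len(scores)
    top = (counts["good"] + counts["excellent"]) / n
    acceptable_or_better = top + counts["acceptable"] / n
    return top >= top_target and acceptable_or_better >= acceptable_target

# Example: 14 students assessed on one PISO.
scores = ["excellent"] * 8 + ["good"] * 3 + ["acceptable"] * 2 + ["minimal"] * 1
print(piso_targets_met(scores))   # False: 11/14 (79%) at good/excellent, just under the 80% target
```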
2.4 Course-Level (Level 4)
This level provides the course-level Fast Cycle, with a frequency of one semester or quarter, for continuous course improvement strategies; it examines the role of courses (through which the relevant skills, knowledge, and behavior are acquired) and develops course outcomes (COs) and COIs. It provides course displays demonstrating that the curriculum properly enables and attains all the characteristics, and it provides evaluations suggesting methods of course improvement and their implementation.
2.5 COI-Level (Level 5)
COIs are established to measure COs (just as PISOs measure SOs), documenting the role of tests, projects, and assignments in the assessment of PISOs and SOs. They may be measured at the level of a concept, topic, or subject within courses and are the ultimate instruments for concrete measurement of PISOs in the classroom.
2.6 Means-Level (Level 6)
For each course, appropriate areas of measurement, or means (tests, assignments, participant observation, oral and written presentations, group projects, capstone projects, etc.), are designed to assess the COIs. Many courses rely heavily on team projects and in-class teamwork. Functioning within a team is in harmony with ABET SO 'd'. Team coherence, and the techniques for achieving such coherence, must be explicitly assessed using team rubrics.
2.7 Tool-Level (Level 7)
Appropriate use of different tools (rubrics, faculty panels, item analysis, percentiles, etc.), different types of rubrics (holistic or global, analytic, weighted, etc.), and the components of an analytic rubric (dimension or performance indicator, scale or level of performance, descriptor or expected result), including the attributes of a dimension (content referent, action verb, value free), generates actionable data for analysis and evaluation, effecting improvement and providing feedback. A variety of sample illustrative rubrics can be found in reference [7].
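A minimal sketch of how these analytic-rubric components (dimension, scale, descriptor, weight) might be captured in code follows; the dimension statements echo the PISO examples used elsewhere in the paper, while the weights and descriptors are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Dimension:
    """One analytic-rubric dimension (performance indicator): a content referent plus an
    action verb, with value-free descriptors (expected results) for each scale level."""
    name: str                      # e.g., a PISO statement
    weight: float                  # weight in a weighted analytic rubric (hypothetical)
    descriptors: Dict[int, str]    # scale level (1..5) -> descriptor

# Hypothetical two-dimension weighted rubric.
rubric: List[Dimension] = [
    Dimension("Demonstrate an understanding of computer organization and architecture", 0.6,
              {5: "Designs and explains a data path correctly",
               3: "Builds a data path with minor errors",
               1: "Cannot describe the basic data path"}),
    Dimension("Demonstrate an understanding of data structures and algorithm analysis", 0.4,
              {5: "Selects and analyzes appropriate structures",
               3: "Uses appropriate structures with weak analysis",
               1: "Chooses inappropriate structures"}),
]

def weighted_score(levels: Dict[str, int]) -> float:
    """Combine per-dimension scale levels (1-5) into a single weighted score."""
    return sum(d.weight * levels[d.name] for d in rubric)

print(weighted_score({rubric[0].name: 4, rubric[1].name: 3}))   # 0.6*4 + 0.4*3 = 3.6
```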
3. SELECTING AND SAMPLING
Good assessment demands good compromises. Using too many of every available instrument (assessment methods, rubrics, data, courses, and students) may generate extensive raw data with little information. Not all large multiple-section courses, methodologies (with their own advantages, disadvantages, and caveats), means, and tools can be used for a single assessment. Appropriate selection of instruments, limiting the total number of courses to six (e.g., three for each PO), and sampling students who represent all grade point averages avoid ambiguity, resolve the caveats concerning methodologies and the number of instruments, and ultimately reduce the workload.
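A hedged sketch of one way to draw such a sample follows, assuming students are simply binned by grade point average; the band edges, sample size, and roster are illustrative only.

```python
import random
from collections import defaultdict

def stratified_sample(students, k_per_band=5, seed=0):
    """Draw a sample representing all grade point averages by sampling from GPA bands.

    `students` is a list of (name, gpa) pairs; the band edges are illustrative only."""
    rng = random.Random(seed)
    bands = defaultdict(list)
    for name, gpa in students:
        band = "high" if gpa >= 3.5 else "middle" if gpa >= 2.5 else "low"
        bands[band].append(name)
    sample = []
    for names in bands.values():
        sample.extend(rng.sample(names, min(k_per_band, len(names))))
    return sample

# Synthetic roster, illustrative only.
roster = [("s1", 2.1), ("s2", 3.9), ("s3", 3.0), ("s4", 2.7), ("s5", 3.6),
          ("s6", 1.9), ("s7", 3.2), ("s8", 3.8), ("s9", 2.4), ("s10", 3.1)]
print(stratified_sample(roster, k_per_band=2))
```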
4. MAPPING AND ALIGNMENT
Exemplar excerpts of the mapping and alignment among PEOs, SOs, PISOs, courses, COIs, and assessment means and tools are provided in graphical and tabular forms (Figures 1 and 2; Table 3). Figure 2 provides a bottom-up traversal view of the involved entities in the tree structure, showing their order of precedence. Table 2 provides an exemplar excerpt of rubric topics that are mapped to course topics and means of assessment.
5. REPORTING AND IMPROVEMENT
The succinct EAMU performance vector (PV) [8] and the Four-Column Template [10] are favorite choices for both assessing and reporting (the data presented here are for illustrative purposes only and are not actual). EAMU is an acronym for Excellent-Adequate-Minimal-Unsatisfactory. The EAMU PV transforms data collected from direct assessment into succinct vectors of information. Table 4 shows an EAMU PV table for an annual report of a PISO assessment for ABET SO 'i'. The EAMU PVs for the courses indicate that this PISO is of concern: the expected success targets were met for courses I and II but not for course III, implying that the content of course III needs to be modified and improved. The numbers of students in courses I, II, and III are 17, 12, and 9, respectively. Course I assessment results, for example, may be reported as the EAMU vector (8, 1, 7, 1), meaning that out of 17 students, 8 were excellent, 1 adequate, 7 minimal, and 1 unsatisfactory.
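To illustrate only the arithmetic of forming such a vector (the performance-vector heuristic itself is due to [8]), a minimal sketch that collapses per-student ratings into an (E, A, M, U) head count:

```python
from collections import Counter
from typing import List, Tuple

CATEGORIES = ("E", "A", "M", "U")   # Excellent, Adequate, Minimal, Unsatisfactory

def eamu_vector(ratings: List[str]) -> Tuple[int, ...]:
    """Collapse per-student category ratings into an (E, A, M, U) head-count vector."""
    counts = Counter(ratings)
    return tuple(counts[c] for c in CATEGORIES)

# Course I from Table 4: 17 students -> (8, 1, 7, 1), matching the example in the text.
course_I = ["E"] * 8 + ["A"] * 1 + ["M"] * 7 + ["U"] * 1
print(eamu_vector(course_I))        # (8, 1, 7, 1)
```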
Based on the Nichols Five-Column Assessment Model [13], a modified Four-Column Template (Table 5) is designed to incorporate the processes of reporting: identified expected outcomes, predefined success targets, actual performance achieved, and improvements effected (actions taken) based on the analysis of the results.
Figure 1. An exemplar excerpt block diagram that provides mapping, alignment, feedback exchanges, and hierarchies of program assessment in a tree structure. The root (Program) branches into PEOs #1-#5 (e.g., PEO #3: "Develop the level of professional competence and technical proficiency for practice of computer science"), which map to SOs 'a' through 'i' (e.g., SO "a": "An ability to apply knowledge of computing and mathematics appropriate to the discipline ..."), then to PISOs #1-#6 (e.g., PISO #3: "Demonstrate an understanding of computer organization and architecture"), then to courses (e.g., CSC 311: Computer Organization and Architecture; CSC 312; CSC 330), then to COIs #1-#10 (e.g., COI #4: "Demonstrate the process of building of a data path ..."), then to assessment means (test, assignment, project, etc.) and assessment tools (rubrics, percentiles, etc.), with feedback exchanged between adjacent levels.
Table 1. Spreading of SO data collection over alternating years (every three years)

SOs | Year 1         | Year 2         | Year 3         | Year 4         | Year 5         | Year 6         | ...
'a' | Data collected |                |                | Data collected |                |                | ...
'b' | Data collected |                |                | Data collected |                |                | ...
'c' | Data collected |                |                | Data collected |                |                | ...
'd' |                | Data collected |                |                | Data collected |                | ...
... | ...            | ...            | ...            | ...            | ...            | ...            | ...
Table 2. An exemplar excerpt of rubric topics that are mapped with course topics and means

Tool (rubric) topic                           | Course topic                                        | Means (assignment, test, and project)
Implementation in a high-level language       | Manipulation of data structures, recursion, etc.    | Test #1, questions 5, 6; programming assignments 2, 3
...                                           | ...                                                 | ...
Data representation and design of algorithms  | Stacks, queues, linked lists, binary trees, etc.    | Test #2, question 4; project 3

Figure 2. A bottom-up traversal view of the involved entities in the tree structure for program assessment: Tools → Means → COIs & COs → Courses → PISOs → POs → PEOs.
Figure 3. An overall view of continuous improvement cycles in the program assessment block diagram: the institution, assessment committees, and constituencies feed the PEO-level, SO-level, and course-level cycles.

Figure 4. An exemplar excerpt of the involvement of various constituencies and consulting entities in developing PEOs: the program advisory council, CSC program faculty, employers of graduates and alumni, the CSC program coordinator, the CSC curriculum committee, the CSC faculty, department chair, and student representative (approval committee), ABET and current ACM/IEEE-CS recommendations (2013), and graduates within a few years after graduation.
Table 3. An exemplar excerpt of PEOs that are mapped with SOs, PISOs, and related courses

Program educational objectives (PEOs) | Student outcomes (SOs) | Performance indicators of student outcomes (PISOs) | Content courses
1. Develop the level of professional competence and technical proficiency for practice of computer science | (a) An ability to apply knowledge of computing and mathematics appropriate to the discipline | PISO #1: Demonstrate an understanding of computer organization and architecture; PISO #2: ...; PISO #6: Demonstrate an understanding of data structures and algorithm analysis | CSC 211, CSC 380, CSC 421, CSC 212, CSC 280, CSC 447
(continued) | (b) An ability to analyze a problem, and identify and define the computing requirements appropriate to its solution | (continued) | (continued)
... | ... | ... | ...
Table 5. An exemplar Four-Column Template report of expected student outcomes

Expected outcomes | Predefined success targets | Actual performance achieved | Affecting improvement / action taken
ABET Student Outcome (a): An ability to apply knowledge of computing and mathematics appropriate to the discipline | 80% of students demonstrate excellent performance, and 90% demonstrate acceptable | 70% of students demonstrated excellent performance, and 75% demonstrated acceptable | Goal was not met. Students' difficulty with polymorphism was addressed (e.g.)
ABET Student Outcome (d): An ability to function effectively on teams to accomplish a common goal | 80% of students demonstrate excellent performance, and 90% demonstrate acceptable | 90% of students demonstrated excellent performance, and 90% demonstrated acceptable | Goal was met. No action necessary.
Table 6. An exemplar excerpt of an assessment report of PISOs for SO "a"

PISOs   | Courses assessed | Unsatisfactory (head count) | Minimal (head count) | Adequate (head count) | Excellent (head count) | Total
PISO #1 | CSC 311          | 2   | 1   | 3   | 8   | 14
...     | ...              | ... | ... | ... | ... | ...
PISO #6 | CSC 421          | 1   | 2   | 3   | 8   | 14
Table 7. An exemplar excerpt of a consolidated assessment report summary for ABET SOs

ABET student outcome | Below expectation (head count) | Meets expectations (head count) | Above expectations (head count) | Total # of students (head count)
SO "a" | 0   | 6   | 8   | 14
...    | ... | ... | ... | ...
SO "i" | 9   | 4   | 1   | 14
Table 4. A PISO PV table for ABET SO "i"

Name                              | U | M | A | E
I - Procedural Programming        | 1 | 7 | 1 | 8
II - Software Development         | 1 | 0 | 1 | 10
III - Object-oriented programming | 1 | 0 | 5 | 3
6. CONSOLIDATED REPORTING
The results of the PISO level are transferred into Table 6, providing an excerpt of head-count rates of the six PISOs for SO "a". For PISO #1, out of 14 students, there were 8, 3, 1, and 2 students in the excellent, adequate, minimal, and unsatisfactory categories, respectively. At the SO level, all information is consolidated in Table 7, reporting cumulative head-count rates for the annual SO report summary. It indicates that for SO "a", students are either meeting or exceeding expectations. For SO "i", nine students performed below expectations; corrective measures for further improvement and sustainability were devised (the addition of a unit on the solution of recurrence equations, with expansion of recurrence relations).
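The paper does not spell out the rule used to collapse the four EAMU categories of Table 6 into the three categories of Table 7 (which also aggregates over six PISOs). Purely for illustration, the sketch below assumes one plausible mapping (unsatisfactory and minimal count as below expectation, adequate as meets, excellent as above) and does not attempt to reproduce the exact Table 7 figures.

```python
def consolidate(piso_counts):
    """Collapse (unsatisfactory, minimal, adequate, excellent) head counts for one PISO
    into a below/meets/above summary; the mapping rule here is an assumption."""
    u, m, a, e = piso_counts
    return {"below expectation": u + m,
            "meets expectations": a,
            "above expectations": e,
            "total": u + m + a + e}

# PISO #1 for SO 'a' from Table 6: 2 unsatisfactory, 1 minimal, 3 adequate, 8 excellent.
print(consolidate((2, 1, 3, 8)))
```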
7. ENABLING, ATTAINING, AND
DOCUMENTING
ABET requires that programs enable all graduates to attain all the characteristics. Course displays including syllabi, exams, samples of student work (Table 2), minutes of meetings, etc. demonstrate how the curriculum enables all the characteristics for all students. It is expected that the institution's mission, the PEOs, and the SOs be documented, published, and visible to the public (locations include web sites, the catalog, etc.).
8. ASSESSMENT AND BLOOM'S TAXONOMY
Bloom's Taxonomy refers to a classification of the different objectives that educators set for student learning. It divides educational objectives into three "domains" (affective, psychomotor, and cognitive) [5]. Receiving, responding, valuing, organization, and characterization by a value are the categories of the affective domain. Perception, set, guided response, mechanism, complex or overt response, adaptation, and origination are the categories of the psychomotor domain. Knowledge, comprehension, application, analysis, synthesis, and evaluation are the categories of the cognitive domain [9, 12]. These categories may be used in the development of high-level rubrics either as dimensions (performance indicators) or as a scale (levels of performance). The higher the cognitive level, the more difficult it is to achieve targets; thresholds and success targets might therefore be lowered at higher cognitive levels.
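As a small illustration of that adjustment (the percentages below are hypothetical and not taken from this paper), success thresholds can be keyed to the cognitive category of a rubric dimension:

```python
# Hypothetical success targets keyed to Bloom's cognitive categories;
# higher cognitive levels receive lower thresholds.
TARGETS = {
    "knowledge": 0.90, "comprehension": 0.90, "application": 0.85,
    "analysis": 0.80, "synthesis": 0.75, "evaluation": 0.75,
}

def target_met(category: str, fraction_successful: float) -> bool:
    """Check a PISO's success rate against the threshold for its cognitive category."""
    return fraction_successful >= TARGETS[category]

print(target_met("analysis", 0.82))    # True: 82% meets the (hypothetical) 80% analysis target
print(target_met("knowledge", 0.82))   # False: the knowledge-level target is stricter (90%)
```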
9. CONCLUDING REMARKS
Simplicity favors regularity. Selecting a small number of PISOs, appropriate methods and instruments, a limited number of relevant courses, and randomly sampled students (representing all) fosters quicker improvements and conforms to the philosophy of keeping assessment simple. The following agendas [4]:

• learning the materials most valued by students and constituencies (educational values),
• treating learning as multidimensional, integrated, and revealed in performance over time,
• keeping assessment continual and cumulative (not episodic), and
• meeting the responsibility to students and stakeholders

serve as excellent vehicles for wider improvement and pedagogical enhancements. They partially constitute the fundamentals of good practice for assessing student outcomes (formerly program outcomes) and should be treated as such during assessment processes.
10. REFERENCES
[1] ABET (Accreditation Board for Engineering and Technology) Program Assessment Workshop, 2012 ABET Symposium, St. Louis, MO, April 2012.
[2] ABET 2011 definition of PEOs.
[3] ABET Program Assessment Workshop, 2012 ABET Symposium, St. Louis, MO, April 2012.
[4] Astin Alexander W. and Banta Trudy W., "Principles of Good Practice for Assessing Student Learning," 210 ABET Faculty Workshop handbook, appendix B, pp. 2-3.
[5] Bloom's Taxonomy, http://en.wikipedia.org/wiki/Blooms' Taxonomy, accessed 7/4/2013.
[6] Criteria for Accrediting Computing Programs, 2012-2013, http://www.abet.org/computing-criteria, accessed 7/11/2012.
[7] Danesh Iraj, "A General Course-Level Assessment Cycle for Computing Courses," Proceedings of the 2013 International Conference on Frontiers in Education: Computer Science and Computer Engineering, Las Vegas, NV, July 22-25, 2013, pp. 48-54.
[8] Estell John, "A Heuristic Approach to Assessing Student Outcomes Using Performance Vectors," ABET Symposium, St. Louis, MO, April 19-21, 2012.
[9] Gronlund N. E., Measurement and Evaluation in Teaching, 4th ed., New York: Macmillan Publishing, 1981.
[10] Jones Lisa, "Using the Four-Column Model to Assess a Program," ABET Symposium, St. Louis, MO, April 19-21, 2012.
[11] Lakshmanan K. B., "Assessing Student Learning in Computer Science - A Case Study," Proceedings of the 2013 International Conference on Frontiers in Education: Computer Science and Computer Engineering, Las Vegas, NV, July 22-25, 2013, pp. 3-9.
[12] McBeath R. J., Instructing and Evaluating in Higher Education: A Guidebook for Planning Learning Outcomes, Englewood Cliffs, NJ: Educational Technology Publications, 1992.
[13] Nichols James O., A Road Map for Improvement of Student Learning and Support Services Through Assessment, New York: Agathon Press, 2005.
[14] Rogers Gloria, Faculty Workshop on Sustainable Assessment Processes, Annual ABET Conference, Baltimore, MD, Oct. 26, 2010.