
TEAMWORK: Involving secondary school students in the design of multimedia
distance learning curriculum material in collaboration with a City Learning Centre.
By Philip Duggan
Paper presented at the Annual Conference of the British Educational Research
Association, University of Exeter, England, 12-14 September 2002
ABSTRACT
In this pilot study, 14-year-old students were involved in the design and implementation of
distance learning multimedia curriculum material for History, Mathematics and Science in
collaboration with a local CLC.
The involvement of the CLC enhanced the students' feeling of belonging to a 'real' project and
altered the usual dynamics between teachers and students. In addition, the CLC offered
technical platforms and expertise unavailable in the normal school environment.
The students initially selected their own roles from those available and, towards the end of
the pilot, many students were more willing to attempt the more technically difficult
implementation tasks.
The students played an active role in determining the choice of multimedia authoring toolkit
to be used and expressed a marked preference for commercial level software. Students proved
to be adept at mastering the concepts underpinning the chosen toolkit and the more
straightforward implementation procedures. They were less successful in determining a
suitable method of documenting their design and found difficulty adapting traditional
diagrammatical techniques to a more interactive environment.
At the end of the study the students proved to be adept at designing procedures to evaluate the
curriculum material and critiquing the methodology developed during the pilot study.
INTRODUCTION
This paper concerns a collaborative pilot study between the Mosslands School, a boys' 11 to
18 comprehensive school in the Wirral and the Learning Lighthouse City Learning Centre,
also based in the Wirral. The aim of the study was to examine if school students could have a
useful role in the design of curriculum software and also to investigate whether access to a
CLC would have any tangible benefits for the school. For the purposes of illustration we will
be concentrating on software produced for Key Stage 3 Science.
THE CITY LEARNING CENTRE
One of the key elements of the Excellence in Cities (EiC) programme is the establishment of
a network of school-based City Learning Centres (CLCs). These were to provide state-of-the-art
ICT-based learning opportunities for the pupils at the host school, for pupils at a network
of surrounding schools and for the wider community.
During the design and implementation stage of the CLC, informal discussions with staff and
students revealed not so much a poverty of resource as a poverty of expectation. When asked
what software/hardware platforms they required most staff had little idea of the capabilities of
the equipment and therefore little idea of how it could be used to enhance the curriculum.
Typical project requests often revolved around extensions to projects already being run in
schools. The students were often more aware of the shortcomings of the school platforms and
often displayed more imagination in assessing potential development.
PARTNERSHIP BETWEEN THE CITY LEARNING CENTRE AND THE SCHOOL
The fundamental question boiled down to ‘what can we do for ourselves?’ and ‘what do we
need the CLC to do?’ Assessing the ‘design status’ of the school proved to be a crucial first
step in allocating work between the school and the CLC. The design status was defined as the
capability of the school to achieve the requirements of the design. Assessing this status
involved auditing the ICT capability of the participating staff, the students and the capabilities
of the school ICT platforms. This was led by the requirements of the design process. The
allocation of work between the school and the CLC was school led and the staff and students
decided what work could be completed in school and what work required the CLC on an
ongoing basis.
The school was allowed to select the software and the number of copies it required in the
CLC centre. Funds were also made available for limited copies of all of the software to be
made available in school. In choosing the software we were constrained to select software
which would have use beyond the scope of the project. It was important that other schools and
institutions would be able to utilise the software for their own purposes. It was decided that
educational versions of all of the software would be purchased for general use. However, it
was also decided that one full version of each piece of software would be purchased so that
students would have access to the full functionality of the software rather than the limited
functionality so often found in educational versions. In addition, purchasing full versions of
software enabled us to legally distribute our products to other institutions.
Funds were made available by the CLC to pay for supply cover for three school staff for one
afternoon per week over a period of up to twelve weeks. Students were released from their
normal lessons by negotiation with their other teachers.
THE CONTRACT
The expectations of the CLC were:
1. All products of the collaboration should be made available to partner schools.
2. Training material and techniques should be disseminated to staff of the CLC and
partnership schools.
3. Products should be made available for demonstration by students to interested
external parties such as the local education authority, advisory teachers and inspectors
and potential sponsors.
The expectations of the school were that:
1. The required expertise would either be available, or made available, in the CLC.
2. Access to the centre would be made available outside of normal school hours as
appropriate.
3. The students' learning would be based around a constructivist and child-centred
approach as appropriate.
REVIEW OF CHILDREN'S ROLES IN DESIGNING SOFTWARE
There have been many studies designed to categorise the variety of ways that children can
assist in the design of software. Druin’s theoretical framework (Druin 2002, Druin 1999)
proved invaluable in educating students and teachers in possible methods of including
students in the design process. Druin categorises student involvement in software design into
four main roles: users, testers, informants and design partners. Based on this
framework let us briefly examine each of these roles and identify some of the more significant
studies.
USERS
The user role was the earliest form of collaboration, with the intention of testing general
concepts and better understanding the learning process. Some of the most common methods of
data collection are activity observation, user impressions, technological impact and work
result analysis.
Assessing the user role
The user role has many strengths. The researcher is in control and it is easy to include
children in the development process, so research can be accomplished quickly. A large
number of comprehensive studies have already been completed in this area. The
weaknesses of the user role reflect the essentially passive nature of the role itself. The user
has less direct impact on technological changes and little impact in the design process.
TESTERS
The role of the tester was primarily developed to determine if developing technology meets
its design goals and to shape new technologies before release.
Emphasis is placed on the identification of possible areas of confusion, likes and dislikes, the
effect of technology on learning outcomes and possible bugs. Potential problem areas of the
product are usually slated for a heavier testing schedule (Strommen, 1998).
Assessing the tester role
The strength of the tester role is that it begins to empower children and the process of
changing technology is quicker than that of the user role. It is also flexible in that a lot of
testing can be performed outside of school. However, children don’t have an input until later
in the design process and the adults tend to make all of the critical decisions. The volume of
generated data may also constrain the number of testers used.
INFORMANTS
The role of the informant was designed to raise new issues, confirm assumptions and set
developmental directions. Scaife et al. (Scaife et al, 1997; Scaife & Rogers, 1999) were the first
to describe the concept of children acting as “informant designers”. In this role the precept is
to question when children should take part in the design process and to determine what
useful design contributions they could make.
Children can act as informants in numerous ways. What differs from the user and tester role is
when these interactions happen and how directly they can affect the design of new
technology. Children can inform the design process at any time the design team believes it
needs direction or support.
Low-tech materials, interviews, design feedback on prototypes, can all be used continually as
methods for informants as long as the materials and methods are ‘age appropriate’. Many of
these methods are similar to techniques used when children are in the role of user or tester.
What differs is when and how often these techniques are used during the design process.
Children should be used as informants depending on the requirements of the specific project.
Assessing the informant role
The informant role empowers children and brings their input into the start of the development.
Children and adults work together in a flexible manner that impacts on how technology is
shaped and developed. Children are challenged in a positive way throughout the process.
However, the decision of when to include the children is still taken by adults and the research
flexibility may be difficult to structure. Working with children inevitably takes more time and
their suggestions may be unworkable or conflict with the pedagogical goals of the design.
DESIGN PARTNERS
Druin (Druin, 2002) defines the techniques used in design partnership as a combination of
three main strands: the co-operative design of Scandinavia (Greenbaum & Kyng, 1991), the
participatory design of the United States (Schuler & Namioka, 1993), and the consensus
participation of England (Mumford & Henshall, 1979). The essential elements of using
children as design partners are based on an attempt to simulate the complexity of an
environment where tasks may not be performed in a sequential manner or even completed by
only one person. The aim is to encourage children to construct their own paths to knowledge,
and to deploy computer tools in a way that can support children as builders, designers, and
researchers.
Druin (Druin, 2002) asserts that a combination of observation, low-tech prototyping, and
time-intensive technology use can lead to the development of new technologies. This
methodology is termed cooperative inquiry. Activity patterns and roles can suggest new design directions.
Artifact analysis on low-tech prototypes can suggest new technology features while
technology immersion can lead to revision and eventual products. Jones & Balka (Jones &
Balka, 1998) and Steyn (Steyn, 2001) extend this metaphor to encourage students to utilise
high tech building blocks and participate in the process of implementation.
Assessing the design partner role
The great strength of using children as design partners reflects the fact that children are
empowered and the learning process can cause both children and adults to change. Feedback
is instant throughout the design process which informs new design directions and required
product revisions. New technology features are easily suggested through the children’s
experience of design centred learning. However, team decisions must be negotiated and this
can take considerable time. The school environment is difficult to work within and finding
personnel who can work with children may be difficult.
BUILDING THE INFRASTRUCTURE
Despite the weaknesses of the design partner approach it was decided to attempt to adapt it to
work within the partnership. Let us look at how the infrastructure to support this approach
was assembled.
Selecting School Staff
Three teachers were selected on the basis of interest rather than expertise. Students were
asked which teachers they would like to work with. In all cases the selected teachers were
those identified by the students as teachers who listened, discussed problems with students
and, perhaps most importantly, were not afraid to accept help from the students themselves.
Selecting students
It was an important feature of the pilot project that participation was not limited to those
students regarded as ‘gifted’ or ‘talented’. Anecdotal evidence from teacher observation
suggests that the ‘more able’ students cope with poorly designed software more easily than
students with less ability. This naturally leads to the assumption that the more able students
might not notice software design features which would cause difficulty or confusion to less
able students. Therefore, students were invited to apply for the team from the whole ability
spectrum of the year, ranging from the gifted and talented stream to those students
‘statemented’ for special educational needs. It was not regarded as important that the ability
range of the participating students constituted an accurate cross section of the ability range of
the year, but it was vital that all ability ranges were represented. In the event the applications
received from students from each section of the ability range proved to be almost equal in
number.
Deciding the student roles
One important decision was deciding upon the roles that the students would take in the design
process. Careful consideration was given to the appropriate role, or combination of roles, that
could be adopted within the available time scale. As previously stated, the more involved
students become in the design process the longer the development cycle. Common sense
decreed that students should be limited to using or testing software in order to fit within the
available time. However, it was decided that this approach would be too deterministic and
may lead to the students being artificially constrained in the roles they wished to take. In the
event it was decided to experiment and let the students decide upon the nature of the role(s)
and the degree to which they would like to participate. Students and staff were informed from
the beginning of the project that they were to regard each other as full equals in the project.
Team selection, roles and support
Students were initially asked to take responsibility for a task from the following list: Design,
implementation, capture, audio/video production, documentation and training. Students
self-selected their own roles according to their own perceptions of their strengths and
weaknesses, although these student perceptions and the roles themselves underwent a radical
transformation. In all cases the initial level of the role was that of a user or tester. As the
students gained confidence, several of them began to act as informants and even design
partners.
Information on students in the design team with special needs was accessed from the special
needs register and utilised to brief learning mentors as to the support individual students may
need. However, the responsibility of requesting support was given to the students. This was
seen as an inclusive strategy for the team to adopt.
As reported by Druin (Druin, 2002) and Jones & Balka (Jones & Balka, 1998) there were
tensions in the perceived relations between the students and the teachers. Students initially
were waiting for instruction and guidance from teachers. However, they soon realised that the
teachers involved in the project were not ICT experts by their own admission. As the project
progressed the students adapted to the change in the power structure more easily than did the
teachers. Often teachers had to be actively discouraged from attempting to direct the sessions.
Toolkit selection
By increasing our access to suitable technology, the partnership with the CLC meant that the
financial and technical constraints normally imposed by working within a school environment
did not apply. In this case we were able to let the required functionality of the system indicate
the possible range of tools to be used in construction rather than attempt to determine “what
can we do with the tools that we have got?”
Initial training for students in basic implementation using Toolbook, Flash and Director was
already underway through the multimedia club in school. Staff from the CLC also attended
several of these training sessions. Students were trained using paper based and CBT material
adapted from introductory training initially written for graduate and post graduate students. In
its adapted form, it proved to be eminently suitable at the secondary school level. Students
became familiar with the use of tools, wizards and behaviours but not with the OOP side of
the implementation. Students were also familiar with audio and video capture, production and
editing. The broad parameters of the software design were outlined to the students and a
meeting was held for the students to decide on the most suitable development platform(s) to
be adopted. Interestingly, both the school and CLC staff involved in the project had little
or no experience in the available authoring platforms and several students took on the task of
training the staff and each other.
Jones and Balka (Jones and Balka, 1998) chose a Java based authoring suite while Steyn
(Steyn, 2001) encouraged some students to author in Delphi. However, after consultation with
students and teachers it was decided that ToolBook offered an authoring platform that
combined the ability to create building blocks, which operated in a similar way to JavaBeans,
in an authoring environment that was easy to use and supported multimedia elements. The
functionality of the project was designed using a set of interdependent and context-independent
building blocks coded in ToolBook's own OOP language, 'OpenScript'. The
building blocks were designed as general purpose utilities that were to be reusable. Building
blocks were created in a standardised format that generated dialogue boxes which asked
students to enter the input data as parameters. It was deemed to be a good idea that the output
data from the building blocks was displayed in a dialogue box to the user as a useful form of
feedback. The students designed a basic template and navigational structure which was
revisited on several occasions in the light of oral feedback sessions. The building blocks were
initially provided by the teachers although the students were beginning to create and code
their own building blocks thereby becoming active designers rather than passive users.
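As an illustration of this building-block pattern, the sketch below shows the general idea in Python. The project itself coded its blocks in ToolBook's OpenScript, and the block name, prompts and the temperature-conversion task used here are assumptions made for the purpose of the example rather than details taken from the project.

# A minimal sketch of the building-block pattern described above. The
# project used ToolBook's OpenScript; this Python version is only an
# illustration, and the block name, prompts and conversion task are
# assumptions rather than details of the actual software.

def ask(prompt: str) -> str:
    """Stand-in for an authoring-tool input dialogue box."""
    return input(prompt + " ")

def request(message: str) -> None:
    """Stand-in for an authoring-tool output dialogue box."""
    print(message)

def unit_conversion_block() -> None:
    """A reusable, general-purpose block: prompt for input parameters,
    compute, and display the output as feedback to the user."""
    raw = ask("Enter a temperature in degrees Celsius:")
    try:
        celsius = float(raw)
    except ValueError:
        request("Please enter a number.")
        return
    fahrenheit = celsius * 9 / 5 + 32
    request(f"{celsius:g} degrees C is {fahrenheit:g} degrees F")

if __name__ == "__main__":
    unit_conversion_block()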
Students fed back using 'open-ended' questionnaires and oral feedback sessions compiled by
the teachers. The most useful feedback came during sessions where students fed back to each
other either individually or in small focus groups. Students felt that in feeding back to each
other they were not being judged on their performance. Also they were able to use their own
slang and jargon in a more uninhibited manner. Teachers were instructed to resist the
temptation to guide or concentrate the feedback in any specific direction as the lateral
thinking of the students generated some very useful insights. Many of the responses were
pedagogically related and demonstrated a critical awareness of the constraints and limitations
of the developmental platform. Indeed, students were often particularly reflective about the
whole methodology of our approach and often made valuable contributions to its
development.
DOCUMENTATION
In a similar manner to Jones & Balka (Jones & Balka, 1998) it was decided that students
needed the ability to replicate the metaphor of the design template and building blocks in a
non-functional form. Traditional methods of representing software design such as flow charts
and structure diagrams proved to generate inadequate metaphors of the designs. Attempts to
design an original representational system initially generated some promising results.
However, time constraints and the difficulty involved in designing a system to represent the
more interactive elements of the design proved to be beyond the capabilities of the majority of
the students. The ability to visualise information flow and functionality was regarded as a
vital component in inculcating programming expertise in students. Therefore, rather than
design a new system it was decided to adopt the SimCHET (an adaptation of PICTIVE)
system as devised by Jones & Balka (Jones & Balka, 1998). This gave the students the ability
to represent building blocks in an iconographic fashion to indicate general functionality,
specify visible connections, specify parameters and functionality.
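As an indication of the kind of information such a documentation system needs to capture, the sketch below records a building block's general functionality, its parameters and its visible connections to other blocks. It is a hypothetical record format written in Python for illustration, not the SimCHET notation itself, and the example block and field names are assumptions.

# A hypothetical sketch of a building-block documentation record; this is
# not the SimCHET notation itself, and the example block and field names
# are assumptions made for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class BlockRecord:
    name: str                                              # name of the building block
    functionality: str                                     # what the block does, in general terms
    parameters: List[str] = field(default_factory=list)    # input parameters the block asks for
    connections: List[str] = field(default_factory=list)   # visible connections to other blocks

    def summary(self) -> str:
        return (f"{self.name}: {self.functionality}\n"
                f"  parameters: {', '.join(self.parameters) or 'none'}\n"
                f"  connects to: {', '.join(self.connections) or 'none'}")

# Example (illustrative) record for a quiz-feedback block.
quiz_block = BlockRecord(
    name="QuizFeedback",
    functionality="Marks a multiple-choice answer and gives verbal feedback",
    parameters=["question id", "selected answer"],
    connections=["NavigationBar", "ScoreTracker"],
)
print(quiz_block.summary())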
Time constraints meant that for the majority of students the purpose of the SimCHET
prototype was not so much an aid to design as a way to document the original system in a
useful manner and document the changes in system structure and content. Some students
extended the use of the prototype to document building blocks and functions which were
more aspirational in nature and possibly very difficult to implement. It was interesting to
observe that students who were prepared to design on the documentation system were more
likely to produce new and innovative design ideas. It was also observed that students taking
on the burden of implementing the design tended to adapt their designs towards the
limitations of the software rather than towards what they thought might be good design.
THE DESIGN FRAMEWORK
The short time scale of the project meant that it was not feasible to design a large piece of
software from scratch. Instead, an initial prototype produced by teachers was redesigned by
the students to make it more usable. This approach had the advantage that the
curriculum material and pedagogical information were already identified for the students to
examine. Right from the start it was decided that students should be able to alter the whole
structure, navigation, appearance and functionality within an agreed design framework
negotiated with the staff. They were also allowed to alter the curriculum content of the
software subject to teacher approval that the required material covered the essential teaching
points. For convenience, it was decided that the majority of the design and routine
implementation would take place within the school. Implementation of the more advanced
features of the software together with audio/visual capture and production would take place
within the CLC.
Defining the curriculum content of the initial design
The curriculum content was decided by staff making decisions on the teaching points to be
made on each topic, the order in which they were to be tackled and the overall structure of the
materials. This process was codified by the staff as follows:
1. Drawing diagrams to map the initial ideas.
2. Analysing objectives and determining activities to meet those objectives.
3. Listing teaching points.
4. Keeping an open mind.
The use of ‘mind mapping’ software proved invaluable during this process and proved to be
very useful in explaining the overall aims and objectives of the software to the students.
Guidelines for media selection
General guidelines for the effective selection of media have already been formulated in many
publications. For example, Rowntree [Rowntree, 1994] recommends that any design which
seeks to combine an effective composite of text, graphics, animation, audio and visual
elements should ask the following questions:
1. Do any learning objectives dictate certain media and which media will be physically
available to the learners?
2. Which media will be the most convenient for the learners to use and are any media
likely to be particularly motivating?
3. Is there any pressure to use/avoid certain media?
4. Which media will the teacher be most comfortable with, which media will the
learners and teachers already have the necessary skills to use?
5. Which media is affordable and can any alternate media be used to ensure variety?
But how could these guidelines be translated into specific recommendations about the correct
selection of media elements to enhance the desired learning outcomes?
In discussion with both students and staff it proved to be relatively easy to codify a design
grid to guide the students to make informed choices about the elements they would like to use
in order to meet the aims of the design (see Figure 1). Although the guidelines were
not to be regarded as an absolute requirement, the students were able to question the staff
about the ‘intention’ of various parts of the prototype software and select the appropriate
media elements accordingly.
During the discussion several of the students raised the issue of how the interface could be
designed to adapt to both novice and expert users. Reasoning that the students who would use
the software represented a wide range of computer literacy, it was decided to enhance the
'learnability' of the software so that novice users could reach a reasonable level of usage
proficiency within a short time [Wu, 2000].
Figure 1. Media selection grid. The grid maps design intentions (subject analysis; conveying
the spirit of the subject; including learners' ideas in teaching; answering questions about the
subject; physically trying things out; physical feedback from the real world; standardised
verbal feedback according to the category of response made) against candidate media (print,
audio, video, interactive video, computer tutoring, computer simulation and multimedia),
indicating which media suit each intention.
Therefore, during the initial design phase of the software students were guided to include the
following access devices designed to assist users to navigate their way around unfamiliar
material [Rowntree, 1994] as appropriate.
At the beginning of the software, or each unit of the software, students were guided to
include:
1. Explanatory title.
2. Contents list.
3. Route map of package or unit.
4. Introduction/overview.
5. Links with other materials.
6. List of objectives.
7. Guidance on how to use the material.
In the main body of the software, or each unit of the software, students were guided to
include:
1. Headings.
2. Numbering systems.
3. Instructions about what to do next.
4. Verbal signposts.
5. Graphic signals.
6. Summaries of each section.
At the end of the software, or each unit of the software, students were guided to include:
1. Glossary
2. Post-test
3. Index
Students were also encouraged to include ‘accelerators’ [Wu, 2000] such as short cuts and
function keys to enable users to perform frequent tasks quickly. Again, the inclusion of
these elements was not an absolute requirement but was left to the students' judgement
to include as they felt necessary.
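One way to read these guidelines is as a checklist that each unit of the software carries with it. The sketch below encodes the access devices listed above, together with a few accelerators, as a simple unit description that can be checked before a unit is signed off. The field names, the example unit and the shortcut keys are assumptions for illustration rather than features of the project's actual implementation.

# An illustrative sketch of the access-device checklist as a unit template.
# The field names, example unit and shortcut keys are assumptions, not the
# project's actual implementation.

from dataclasses import dataclass, field
from typing import Dict, List

BEGINNING_DEVICES = ["explanatory title", "contents list", "route map",
                     "introduction/overview", "links with other materials",
                     "list of objectives", "guidance on how to use the material"]
BODY_DEVICES = ["headings", "numbering systems", "what-to-do-next instructions",
                "verbal signposts", "graphic signals", "section summaries"]
END_DEVICES = ["glossary", "post-test", "index"]

@dataclass
class Unit:
    title: str
    devices_included: List[str] = field(default_factory=list)
    accelerators: Dict[str, str] = field(default_factory=dict)  # shortcut key -> action

    def missing_devices(self) -> List[str]:
        """Report which recommended access devices this unit still lacks."""
        recommended = BEGINNING_DEVICES + BODY_DEVICES + END_DEVICES
        return [d for d in recommended if d not in self.devices_included]

# Example (illustrative) unit check.
unit = Unit(
    title="Key Stage 3 Science unit",
    devices_included=["explanatory title", "contents list", "list of objectives",
                      "headings", "section summaries", "glossary", "post-test"],
    accelerators={"F1": "help", "Ctrl+G": "glossary", "Ctrl+N": "next section"},
)
print(unit.missing_devices())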
IMPLEMENTATION
The implementation largely proceeded as envisaged with routine development and production
taking place within the school and the implementation of advanced features and media
production taking place within the CLC. Final assembly of the various components of the
software was initially undertaken at the CLC and completed at school. However, there were
some problems. The progress of the project was hampered by the timing of the
implementation. The project was a pilot study for the CLC and was conducted while the CLC
was still being set up. As such, problems were caused by protocols and procedures not being
bedded in, access levels and data sharing between students not being properly thought out and
compatibility problems between software causing systems to crash on a regular basis. In
addition, access to CLC staff with the requisite technical skills often proved to be a problem.
Clearly, the majority of personnel with industrial level skills still work in industry. Time
constraints meant that it often proved to be expedient for the school staff to develop the
required skills themselves. Of course, the acquisition of these skills ultimately proved to be of
some benefit to the staff but it did necessitate the timing of several aspects of the development
cycle to be altered. Students were often keen to stay beyond the allotted session and they also
did a considerable amount of development as part of their extra curricular activities.
ANALYSIS
There are many methods of determining the effectiveness of the collaborative design process.
For example:
Activity observation data can be collected in a wide variety of formats. These formats
include: patterns of activity, and general user concern (Burov, 1991; Nicol, 1988); live
observations by hidden observers or captured data for later analysis (e.g., Goldman-Segall,
1998; Plowman et al., 1999); talk, movement, gesture, and machine interaction (Plowman et
al., 1999, p. 314).
Researchers can themselves participate in the gathering of data by taking part in the activities
of the classroom (e.g., Plowman, 1992) as can teachers (e.g., Koenemann et al., 1999).
Qualitative interviews and surveys can be used to collect user impressions about their likes,
dislikes, interests and difficulties. Interviews can be informal (e.g., Jackson et al., 1998) or
more formal featuring numerical questions (Salzman et al., 1999).
The impact that technology has on a child’s learning can be determined by tests given to
children before and after the use of technology. Quantifiable tests are suitable for evaluating
subject matter knowledge (Salzman et al., 1999) while qualitative descriptions of children as
technology users can be obtained by observing children over an extended period of time (e.g.,
Blomberg et al., 1993). Data collection can be done by asking children, teachers and
observers to write their thoughts in journals.
Various methodologies exist for analysing the data, and much depends upon the nature of
the information being sought. Examples of this include measuring the speed of task
completion (Fell et al., 1994) and the quantity of content questions answered by a child after
using technology (Salzman et al., 1999).
However, all of these techniques are labour intensive and beyond the time available and the requirements
of the pilot study. Clearly another approach to evaluating the software was needed. During an
Internet trawl the students themselves identified the following method of assessing the
usability of the software.
Dix et al. [Dix et al., 1997] state that the principle of usability can be divided into three main
categories:
Learnability: the ease with which new users can begin effective interaction and achieve
maximal performance.
Flexibility: the multiplicity of ways in which the user and the system can exchange information.
Robustness: the level of support provided to the user in determining successful
achievement and assessment of goals.
The ultimate test of a product's usability is based on measurements of the user's experience
with it.
In an attempt to follow this methodology, school staff compiled and analysed the feedback
given by an equivalent sample of 37 students not involved in the design team during oral and
written sessions for each of the categories outlined above. Although a statistical analysis of
the results was performed, for our purposes it is sufficient to give an overview of these results
in qualitative terms. See Table 1.
Table 1.

Learnability (familiarity, guessability, consistency, predictability, synthesizability,
generalizability): Students generally reported a positive first impression of the software. The
students felt confident that they could 'guess' how the system would work and could initiate
interaction. Students reported confidence that the software demonstrated consistency of
behaviour arising from similar task objectives. However, students were less successful in
being able to predict which operations could be performed by the software or to assess the
effect of past operations on the current state of the software. It also proved difficult to assess
whether the students could extend their knowledge of specific interaction behaviour to
similar but previously un-encountered situations.

Flexibility (dialogue initiative, adaptability, customizability, substitutability): All students
felt that the software could have been improved by being able to initiate more varied
dialogue with the user. The students were impressed with the help facilities and the ability of
the software to adapt its audio and visual feedback to the user, but the possibility of the
software performing this function automatically was raised by several students. Roughly one
third of the students related that they were impressed with the software's ability to allow the
user to substitute equivalent values, i.e. inches and centimetres.

Robustness (observability, recoverability, task conformance, responsiveness): The majority
of students felt that the software would have benefited from more visible indications of the
internal state of the system. Criticism was generally levelled at the lack of indication of when
the software was loading or saving files, searching for information or performing a
calculation. All of the design students were initially aware of the need for users to be able to
take corrective action once an error had been recognised, but this facility proved to be
beyond the ability of the students to implement. Roughly three quarters of the students used
for testing commented on this lack of functionality, but nearly all students reported that the
software supported all of the tasks that the user was required to perform. The responsiveness,
or rate of communication between the system and the user, proved impossible to ascertain as
the software was necessarily installed on a network system and was therefore affected by the
responsiveness of the platform on which it was installed.
EVALUATION
In evaluating the success, or otherwise, of the collaboration between the Mosslands School
and the Learning Lighthouse and the role taken by the students in the software design process
we are essentially asking two questions:
1. Did partnership with a CLC generate opportunities and learning outcomes
unavailable to the school acting alone? Clearly the answer to this question is yes.
Staff and students appreciated the opportunity to work in new environments doing
activities regarded as cutting edge. Students benefited from having their views and
opinions listened to and acted upon and derived immense satisfaction from seeing
their name on the author credits of the software. School staff gained by having the
opportunity to enhance their own skills and experimenting with new ways of relating
with the students. Staff at the CLC gained valuable insight about how their activities
need to interact with the requirements of the school. They also gained valuable
experience in alternative ways of setting up their hardware and software platforms to
enhance productivity.
2. Did the collaboration of the students in the design of the software result in a useful
product, and did the students gain anything from the experience? Again, the answer is
yes. Examination of the student responses in Table 1 indicates that the software
fulfilled all of the teaching objectives of the design but that it was insufficiently
flexible or robust. The initial reaction to these results would seem to indicate that
these are major failings. However, it is argued that the vast majority of educational
software purchased by schools also fails under these same criteria. In addition,
implementation of these criteria often requires industrial or research level expertise
and it is unrealistic to expect students to be able to implement at this level. It is to the
students' credit that they recognised the shortcomings of the software they helped to
design, even if they were not able to address them.
CONCLUSION
Modules of study based around the software have been incorporated into the Key Stage 3
scheme of work for ICT and Science for this academic year. Demonstrations of the software to
representatives of the local authority are scheduled for early October, and we have agreed a
schedule for training staff from other schools in the software used in the implementation.
More projects are being planned in collaboration with the Learning Lighthouse for next year.
Of particular interest is the intention to investigate whether the use of software designed in
collaboration with students has a significant impact on the measurable development of
thinking skills.
REFERENCES
Blomberg, J., Giacomi, J., Mosher, A., & Swenton-Wall, P. (1993). Ethnographic field
methods and their relation to design. D. Schuler, & A. Namioka (Eds.), Participatory design:
Principles and practices (pp. 123-156). Hillsdale, New Jersey: Lawrence Erlbaum.
Burov, A. N. (1991). Development of creative abilities of students on the basis of computer
technology. First Moscow International HCI'91 Workshop (pp. 289-296).
Dix, A., Finlay, J., Abowd, G., & Beale, R. (1997). Human-Computer Interaction (2nd edn).
Harlow: Prentice Hall.
Druin, A. (2002). The role of children in the design of new technology. Behaviour and
information technology, 21(1) 1-25.
Druin, A. (1999). Cooperative inquiry: Developing new technologies for children with
children. Human Factors in Computing Systems: CHI 99 (pp. 223-230). ACM Press.
Fell, H. J., Ferrier, L. J., Delta, H., Peterson, R., Mooraj, Z., & Valleau, M. (1994). Using the
Baby-Babble-Blanket for infants with motor problems: An empirical study. First Annual
ACM Conference on Assistive Technologies (pp. 77-84). ACM Press.
Goldman-Segall, R. (1998). Points of viewing children's thinking. Mahwah, NJ: Lawrence
Erlbaum Associates.
Greenbaum, J., & Kyng, M. (1991). Design at work: Cooperative design of computer
systems. Hillsdale, NJ: Lawrence Erlbaum.
Jones, M. L. W., & Balka, E. (1998). Planning and implementing participant re-design of
middle school mathematics software: The second phase of the IPS/PDG/ATIC project. ATIC-DL Report 98-02.
Koenemann, J., Carroll, J. M., Shaffer, C. A., Rosson, M., & Abrams, M. (1999). Designing
collaborative applications for classroom use: The LiNC project. A. Druin (Ed.), The design of
children's technology. San Francisco, CA: Morgan Kaufmann.
Mumford, E., & Henshall, D. (1979). Designing participatively: A participative approach to
computer systems design. UK: Manchester Business School.
Nicol, A. (1988). Interface design for hyperdata: Models, maps, and cues. Human Factors Society
32nd Annual Meeting (pp. 308-312).
Plowman, L. (1992). An ethnographic approach to analyzing navigation and task structure in
interactive multimedia: Some design issues for group use. Conference on People and
Computers: HCI'92 (pp. 271-287).
Plowman, L., Luckin, R., Laurillard, D., Stratfold, M., & Taylor, J. (1999). Designing
multimedia for learning: Narrative guidance and narrative construction. Human Factors in
Computing Systems: CHI 99 (pp. 310-317). ACM Press.
Rowntree, D. (1994). Preparing Materials for Open, Distance and Flexible Learning. London:
Kogan Page.
Salzman, M. C., Dede, C., & Loftin, R. B. (1999). VR's frames of reference: A visualization
technique for mastering abstract multidimensional information. Human Factors in Computing
Systems: CHI 99 (pp. 489-495). ACM Press.
Scaife, M., Rogers, Y., Aldrich, F., & Davies, M. (1997). Designing for or designing with?
Informant design for interactive learning environments. Human Factors in Computing
Systems: CHI 97 (pp. 343-350). ACM Press.
Scaife, M., & Rogers, Y. (1999). Kids as informants: Telling us what we didn’t know or
confirming what we knew already. A. Druin (Ed.), The design of children's technology. San
Francisco, CA: Morgan Kaufmann.
Schuler, D., & Namioka, A. (1993). Participatory design: Principles and practices. Hillsdale,
NJ: Lawrence Erlbaum.
Steyn, D. (2001). ITFORUM PAPER #53 - The value of students as part of the design team
for educational software. Posted on ITFORUM April 28, 2001. Retrieved 2nd of May 2001
from the World Wide Web: http://it.coe.uga.edu/itforum/home.html
Strommen, E. (1998). When the interface is a talking dinosaur: Learning across media with
Actimates Barney. Human Factors in Computing Systems: CHI 98 (pp. 288-295). ACM
Press.
Wu, J. (2000). Accommodating both Experts and Novices in One Interface. Retrieved 13/09/01
from http://www.otal.umd.edu/UUGuide/jingwu/.