Original article doi: 10.1111/jcal.12092

Stop talking and type: comparing virtual and face-to-face mentoring in an epistemic game

E.A. Bagley* & D.W. Shaffer†
*LeapFrog Enterprises, Emeryville, California, USA
†Wisconsin Center for Education Research, University of Wisconsin-Madison, Madison, Wisconsin, USA

Abstract

Research has shown that computer games and other virtual environments can support significant learning gains because they allow young people to explore complex concepts in simulated form. However, in complex problem-solving domains, complex thinking is learned not only by taking action, but also with the aid of mentors who provide guidance in the form of questions, instructions, advice, feedback and encouragement. In this study, we examine one context of such mentoring to understand the impact of replacing face-to-face interactions between mentors and students with virtual, chat-based interactions. We use pre- and post-measures of learning and a post-measure of engagement, as well as epistemic network analysis (ENA), a novel quantitative method, to examine student and mentor discourse. Our results suggest that mentoring via online chat can be as effective as mentoring face-to-face in appropriately structured contexts more generally – and that ENA may be a useful tool for assessing student and mentor discourse in the context of learning interactions.

Keywords: design-based research, epistemic network analysis, mentoring, online learning, virtual environment.

Accepted: 11 October 2014

Correspondence: David Williamson Shaffer, Wisconsin Center for Education Research, University of Wisconsin-Madison, Educational Sciences Room 1069, 1025 West Johnson Street, Madison, WI 53706, USA. Email: [email protected]

The opinions, findings and conclusions do not reflect the views of the funding agencies, cooperating institutions or other individuals. This study was approved by the University of Wisconsin Institutional Review Board.

Journal of Computer Assisted Learning (2015), 31, 606–622. © 2014 John Wiley & Sons Ltd

Introduction

Research has shown that computer games and other virtual environments can support significant learning gains because they allow young people to explore complex concepts in simulated form (Clark et al., 2009; Dondlinger, 2007; Gee, 2007b; Honey & Hilton, 2011; Squire, 2011; Vogel et al., 2006; Wilson et al., 2009). Virtual environments allow young people to solve simulations of complex problems, helping them learn real-world skills, knowledge and ways of thinking. By complex problems, we mean problems that do not have well-formed solutions – problems that cannot be solved by applying any specific algorithm or set of steps defined in advance (Lynch et al., 2009; Voss, 2014). Such problems, which require the exercise of judgement and discretion, are characteristic of work in many professions and other real-world contexts (Marouda-Chatjoulis & Humphreys, 1997; Schön, 1983, 1987; Shaffer, 2007). In simulations, complex problems can be scaffolded in a dynamic model, in the sense that the simulation can make it possible for students to do things that would otherwise be too expensive, dangerous or difficult to accomplish in a classroom setting (Shaffer, 2007). Thus, in simulations, young people have opportunities to take action in complex domains.

In complex problem-solving domains, however, learners do not always develop skills merely by trying to solve problems.
In many instances, complex thinking is learned not only by taking action, but also with the aid of mentors: more experienced individuals who provide guidance in the form of questions, instructions, advice, feedback and encouragement. While some advocates of digital technologies for learning suggest that students will be able to learn from games, simulations and other digital environments without adult mentoring (Bennett, Maton, & Kervin, 2008; Ito, 2010a, 2010b; Resnick, 1994), many researchers argue that students’ understanding of their experiences in pedagogical simulations needs to be shaped in conversation with peers and with the teacher through additional learning activities set around the simulations themselves (Gee, 2007b; Squire, 2005). One way to accomplish this, of course, is to integrate these conversations into the simulation itself. For example, epistemic games are pedagogical simulations that model both complex, non-routine problems and the processes by which professionals are trained to solve them by replicating the structure of an internship or other professional practicum. Key to the pedagogy of such practica is the interplay of action and reflection: students not only solve problems, they also talk about their problem-solving process with peers and mentors. Thus, in epistemic games, learning is supported by interactive mentoring (Klecka, Cheng, & Clift, 2004; Larson, 2006; Shaffer, 2007). During an epistemic game, students role-play as members of some community of practice (Lave & Wenger, 1991). They solve a realistic, complex problem for which there is no optimal solution, which typically involves reading and analysing research reports, generating and testing hypotheses using built-in problem-solving tools, writing proposals and reports, and presenting and justifying their proposed solutions. In their role as interns, students communicate with one another, with characters in the game such as their supervisor and concerned citizens, who are known generically as non-player characters or NPCs, but also with a human playing the role of a mentor. This trained human mentor facilitates students’ work in the epistemic game with the help of scripts, providing direction, answering questions and helping students to frame, investigate and solve complex problems. In regularly scheduled reflection meetings, the mentor © 2014 John Wiley & Sons Ltd 607 helps students discuss previously completed activities and plan the next steps in the project. Such games are ‘epistemic’ in the sense that the activities of the simulation mirror the epistemological structure in which newcomers are initiated into a realworld community of practice (Lave & Wenger, 1991; Shaffer, 2007). A critical feature of epistemic games that distinguishes them from other educational or serious games is that they typically include interactions with mentors in the game environment. Epistemic games thus provide a model of simulated problem solving that incorporates mentoring into the simulation itself. But – and this is a critical point – including mentoring within the simulation changes the mentoring that shapes students’ understanding from a face-to-face interaction into a virtual interaction (via online chat). An epistemic game therefore provides an occasion to look at the impact of changing in-person mentoring, which is common in many communities of practice, into mediated or virtual mentoring, which is characteristic of simulation or game-based learning environments. 
In this study, we investigate the impact of this shift. We developed two different versions of one specific epistemic game and tested them in a randomized, controlled study – albeit one at a very small scale. Specifically, we examine the reflection meetings between mentors and students in a face-to-face condition and a virtual (online chat) condition. We used a mixedmethods approach to explore the effect of mentoring method on (a) mentors’ discourse, (b) students’ discourse, (c) students’ learning outcomes and (d) students’ level of engagement with the intervention. To accomplish this comparison, we used epistemic network analysis (ENA), a method of quantitative analysis of logfile data, to examine player and mentor discourse. ENA is described in more detail below, but briefly, ENA models the extent to which the discourse of an individual (or group), represented in utterances in a logfile, reflects the discourse practices of some target community. ENA does this by creating a network model of the way in which concepts used by individuals (or groups) are connected to one another in the discourse data. In this way, it documents the development of and connections among elements of professional thinking in a given domain. These data are represented in a dynamic network model that quantifies changes in the strength and composition of these connections over time. 608 Of course, this study took place in the context of one specific simulation-based learning environment. But our goal was to understand the impact of different delivery modes on the qualities of the mentoring process – and thus to explore the impact of mentoring delivered in simulated form more generally. Theory Virtual mentoring Bierema and Merriam (2002) define virtual mentoring as ‘a computer mediated, mutually beneficial relationship between a mentor and a protégé which provides learning, advising, encouraging, promoting, and modeling that is often boundaryless, egalitarian, and qualitatively different than traditional face-to-face mentoring’. They argue that virtual mentoring has several advantages over traditional face-to-face mentoring in that it reduces or eliminates the barriers posed by geography, time, race, gender, age and hierarchy. Because computer-mediated interactions can offer a context for communication between diverse parties, they argue, virtual mentoring holds ‘the potential to erode some of the traditional power dynamics that tend to structure mentoring relationships’ (Bierema & Merriam, 2002, p. 220). Virtual mentoring via online chat or e-mail does not include the visual cues that can create or reinforce biases, stereotypes and other predispositions harmful to the mentoring relationship, so it has the potential to reduce disadvantage among groups poorly served by traditional mentoring (Ensher, Heun, & Blanchard, 2003). In the context of a learning game or simulation, of course, virtual mentoring does not necessarily imply automated mentoring. It is possible for a real person to act as a mentor through chat and other mediated or virtual connections. However, one advantage of virtual mentoring is that some aspects of virtual mentoring can be automated (Linn et al., 2014; Morgan et al., 2013; Shaffer & Graesser, 2010). Automating aspects of mentoring can relieve human mentors from addressing domain-specific questions or providing basic resources, enabling them to focus on relationship building, individualized guidance and other higher order tasks. There are other potential advantages as well. 
For example, virtual mentoring may be less expensive than face-to-face mentoring because one mentor can support E.A. Bagley & D.W. Shaffer more students virtually than would be possible face-toface. Virtual mentoring has also been shown in some cases to improve academic performance as well as networking and career opportunities for mentees (see De Janasz, Ensher, & Heun, 2008 for a recent review). However, the body of research on the outcomes of virtual mentoring is quite small, and almost no studies comparing virtual and face-to-face mentoring have been conducted (Miller & Griffiths, 2005). Although there are advantages to virtual mentoring, there are potential disadvantages as well. Brennan and Lockridge (2006) argue that in chat-based interactions, mentors have no access to mentee’s body language, tone of voice or other signals that can only be detected in a shared physical environment, and as a result, miscommunication can occur. Although Whittaker (2003) found that people communicate clearly and easily over a wide variety of media, including those with relatively low bandwidth like online chat programs, virtual media can be limiting in other ways. For example, the e-Mentoring for Student Success program was designed to facilitate mentoring of middle-school science teachers by professional scientists and more experienced teachers. However, early assessments indicated that junior teachers were much more likely to use the program to obtain basic advice or resources, not to improve their content knowledge or teaching skills, which was the original intent (Jaffe et al., 2006). Thus, even when communication fidelity is not an issue, the richness of interactions may be reduced in virtual mentoring contexts if appropriate scaffolds are not in place or if mentors and mentees have different expectations. Given the relative paucity of research on the efficacy of virtual mentoring even in the past 5–10 years, it is unclear whether the constraints of virtual mentoring – namely the possibilities for lost information, miscommunication or reduced engagement discussed by researchers such as Brennan and Lockridge (2006) – outweigh the affordances. Therefore, this study explores whether mentor communication with students through a virtual chat program rather than face-to-face changes anything about the students’ experience of an educational intervention. To compare these two forms of mentoring, we measured the quantity, quality and impact of (a) the discourse content from reflective conversations between students and mentors, (b) students’ learning outcomes and (c) students’ reported level of © 2014 John Wiley & Sons Ltd Virtual and face-to-face mentoring engagement in two conditions of the epistemic game Urban Science. Urban Science This study examines mentoring in Urban Science, an epistemic game for high school students designed to simulate a practicum in urban planning (Bagley & Shaffer, 2009, 2011; Beckett & Shaffer, 2005; Shaffer, 2007). In Urban Science, students play the role of interns at an urban planning firm tasked with rezoning a local community to address social, ecological and economic concerns. They receive a city budget plan and letters from community groups that want a say in the redevelopment process. Using a geographic information system model of the region, student teams can explore the effects of changes on various indicators. For example, they can see how a change in zoning to accommodate a large retail store will increase jobs but will also increase waste and traffic. 
Using these resources, students must propose a redevelopment plan that is within the city’s budget and satisfies the community groups, many of whom have conflicting demands. This requires students to make compromises and justify their decisions, as no one plan can meet all requests. The goal of Urban Science, and of epistemic games more generally, is to help students learn how to frame, investigate and solve problems in the way that communities of practice in the real world do. As with the internships and professional practica on which they are modeled, what distinguishes an epistemic game from other learning environments is the combination of action, the ability to do authentic, meaningful work, and reflection-on-action (Schön, 1983, 1987; Shaffer, 2003), thinking about what went well, what did not, and why, and then discussing these thoughts with peers and mentors. Mentors are critical to the learning process because they help learners articulate their reflections in ways that are meaningful in the context of a given practice. For this study, we developed two versions of Urban Science. In one version (the face-to-face condition), mentors interacted with students in person throughout the intervention. In the other version (the virtual condition), human mentors interacted with students via a text-based chat system throughout the intervention. (Both versions of the intervention are described in more detail in the Methods section.) © 2014 John Wiley & Sons Ltd 609 Measures We used several approaches to investigate the differential impact of face-to-face and virtual mentoring in Urban Science. Quantity of discourse Research has shown that higher word counts often correlate with higher quality discourse (Pennebaker, Francis, & Booth, 2007), but word counts are most often paired with qualitative analyses or more rigorous quantitative methods to understand the complexities of the discourse. For example, Schneider et al. (2002) used word counts to compare online and face-to-face focus group participants’ discourse. Their word count comparison showed that online participants tended to contribute fewer words to their discussions than the face-to-face participants. Qualitative analysis showed that online participants were less likely to explain their opinions or to provide detailed insight into the thinking that led them to their conclusions. Quality of discourse The quality of mentor and mentee discourse can be measured by exploring the ways in which characteristics of discourse are representative of professional thinking – in this case, thinking like an urban planner. The complete coding scheme with which we analysed student and mentor discourse is given in Appendix I, but in general, our analysis focused on words and phrases indicative of urban planning attributes, such as domain-specific skills, values, identity, and knowledge, as well as language characteristic of explanation, justification and other epistemological elements. This approach is particularly appropriate for learning environments that are modeled on real-world practices, such as Urban Science and epistemic games in general. Lave and Wenger (1991) argue that communities of people who share a common body of knowledge, a set of skills, a value system and a set of decision-making processes are communities of practice. 
Epistemic frame theory suggests that any community of practice has a distinct epistemic frame that consists of the combination – linked and interrelated – of skills, knowledge, identity, values and epistemology (Shaffer, 2006, 2007). Skills, of course, are the things that people within a profession do, and knowledge comprises the understandings that people in the profession share. E.A. Bagley & D.W. Shaffer 610 Identity is the way that members of the profession see themselves, values are the beliefs that members of the profession hold, and epistemology concerns the warrants that justify actions or claims as legitimate within the profession. Central to epistemic frame theory is the idea that the discourse practices of a community can be modeled by the linkages between epistemic frame elements. Skills are always linked to some form of knowledge, values, identity and epistemology (and each of the other elements are, in turn, associated with all the others); however, they are not always linked to the same ones or in the same ways. Thus, modeling the structure of the linkages between epistemic frame elements can be used to measure the quality of discourse in an epistemic game (Shaffer, 2006). ENA, described in more detail in the Methods section, uses a network model to quantify the structure of connections among frame elements (skills, knowledge, values, identity and epistemology) of a community as expressed in the discourse of individuals or groups. In the context of mentor and student discourse in Urban Science, this provides a useful means to compare the qualities of the discourse practices of individuals in different conditions of the intervention. Engagement Many games and simulations are used in education because their narrative structure is engaging to young people (Gee, 2007a, 2007b; Shaffer, 2007; Squire, 2011). Research on engagement in narratives suggests that the extent to which one becomes engaged, transported or immersed in a narrative influences the narrative’s potential to affect subsequent story-related attitudes and beliefs (Busselle & Bilandzic, 2008). Green and Brock (2000) argue that engagement can be measured by quantifying the extent to which individuals are absorbed into a story or transported into a narrative world. Green and Brock write about transportation into a text-based narrative world (e.g., a novel), but they argue that transportation is not limited to the reading of written material. Rather, narrative worlds are broadly defined with respect to modality; the term ‘reader’ may be construed to include listeners, viewers or any recipient of narrative information. Whether the narrative is fictional or non-fictional, the same processes involved in transportation are theorized to occur. To measure student engagement in the two conditions of Urban Science, we used Green and Brock’s validated measure of this transportation effect, adapted to fit the Urban Science learning environment (Green & Brock, 2000). Research questions In this study, we asked four research questions comparing the face-to-face and virtual conditions of Urban Science: 1. Were there differences in mentors’ reflection meeting discourse between the two conditions? 2. Were there differences in students’ reflection meeting discourse between the two conditions? 3. Were there differences in students’ learning outcomes between the two conditions? 4. Were there differences in students’ level of engagement between the two conditions? 
Methods Participants Twenty-one high-school-aged children (11 girls, 10 boys) were recruited by outreach specialists at the Massachusetts Audubon Society’s Drumlin Farm Wildlife Sanctuary. Participants used a 10-h version of Urban Science as part of a week-long Conservation Leadership Programme in August 2010. The two mentors (called planning consultants in the epistemic game) were an education researcher (the primary author) and a Drumlin Farm education specialist, neither of whom had prior experience or training in urban planning. Both mentors underwent a 1-day training that covered the urban planning profession, the simulation’s activities and mentoring strategies. Intervention: the Urban Science epistemic game In the Urban Science epistemic game, students log in to an office intranet portal, through which they receive instructional e-mails from an NPC supervisor controlled by the mentor. Students are asked to produce land use plans for redevelopment of a local community. To make these plans, students work in teams that represent the interests of a single stakeholder group. They conduct research during a virtual site visit, in which © 2014 John Wiley & Sons Ltd Virtual and face-to-face mentoring they learn about their assigned group of stakeholders and what those stakeholders think is important. They conduct preference surveys, in which they work with their colleagues to identify specific stakeholder targets for various indicators, such as housing, water quality or revenue. To create the preference surveys, students use a geographical information system mapping tool called iPlan, which allows them to model the social and environmental impacts of land use changes. iPlan is an interactive Google map of the community with each zone coloured according to its zoning code (e.g., residential, commercial, industrial, mixed use). If students change the zoning code for a region, which they can do by clicking on a zone to pull up a menu of zoning codes, they can see the specific effects the change would have on various indicators. Impact indicators include jobs, sales, housing and pollution levels. Once the student teams complete their preference surveys, the stakeholder groups provide feedback, which allows students to determine how much change to the site would satisfy their team’s stakeholder group. Finally, students create a land use proposal, in which they attempt to meet the needs of both the stakeholders they researched and the stakeholders with which the two other teams worked. In the final proposal, they create a plan for redevelopment and describe and justify their recommendations, as well as the limitations and compromises they needed to make. Throughout the epistemic game, each team works with a mentor. Mentors answer questions, provide suggestions and support, and guide students’ reflections on their work. For this experiment, students were randomly assigned to one of two mentoring conditions: face-toface or virtual (online chat). In both conditions, students were randomly divided into teams. Students completed group activities, including reflection meetings, with their assigned teams. Students in both conditions played the game in a computer lab, and each student had individual use of a computer. The same two individuals served as mentors in both conditions. The mentors were education specialists trained to mentor students in Urban Science with the help of scripts and general guidelines. The mentors were not subject matter experts in urban planning. 
Both mentors had used Urban Science with students in both versions prior to the experiment. There were also two adults physically present in the room with the students in the virtual chat condition. Students were told that those adults, both education researchers, were in the room to help with technical problems, and that questions dealing with the epistemic game should be sent to the mentors via chat. In both conditions, mentors were given a script to follow which provided guidance on how to respond to different situations, and they were instructed to keep the conditions comparable. Everything else about the two conditions was the same or as close as possible.

As part of the game activities of both conditions, mentors held four reflection meetings in which they asked students a series of four scripted questions regarding what work they had finished doing, what they learned during the last activity, what they thought should happen next and what additional information would be helpful. The mentors were instructed to listen to the responses before interjecting.

Data collection, analysis and coding

Three sources of data were collected in both conditions of Urban Science: students' intake interview responses, their exit interview responses and discourse data from the reflection meetings. In both conditions, the online portal recorded the students' intake and exit interview responses. In the chat condition, all of the students' and mentors' reflection meetings were recorded by the online portal. In the face-to-face condition, the reflection meetings were audio-recorded and transcribed.

Intake and exit interviews

Pre- and post-tests that had been used in previous experiments with Urban Science were incorporated into the intervention as intake and exit interviews. Responses from a matched-pair question in the intake and exit interviews were analysed to determine whether the students' learning outcomes were different between conditions. The matched-pair question asked students to consider possible solutions to improving the water quality in a lake or river:

The town of Maple Ridge, MI [Forest Hill, CO]1 is concerned about high levels of nitrates and carbon tetrachloride in their lakes [rivers]. What could they do to clean up their lakes [rivers] if they care most about reducing the level of nitrates (NO3) [carbon tetrachloride (CCl4)]?

Students' responses were scored a 0, a 1 or a 2: a 0 indicates an incorrect response, a 1 indicates a partially correct response, and a 2 indicates a correct response. To receive a 2, an answer must (a) accurately identify one or more land use changes that would reduce NO3 and CCl4 and (b) link the land use change to the desired effect. For example, the following is a 2-point answer: 'By reducing the number of factories and increasing the number of wildlife sanctuaries, both the CCl4 and NO3 levels should decrease'.
A 1-point answer accurately identifies some aspect of this relationship between land use and pollution, but stops short of drawing an explicit connection between cause and effect. For example, 'get rid of factories' is a 1-point answer. Answers such as 'I don't know' received 0s, as did those that are incorrect, such as 'Maybe they good introduce good chemicals that feed on the CCl4'.

As part of the exit interview, students were asked six 4-point Likert scale (1 = strongly disagree, 4 = strongly agree) questions to measure their level of engagement during the game (see Table 1). The mean scores for each of the six questions were calculated within each condition, and t-tests were used to compare the responses between conditions.

Table 1. Exit Interview Questions Used to Measure Engagement. The Questions Were Adapted From Green and Brock's (2000) Narrative Questionnaire to Fit the Epistemic Game Environment
E1: While I was in the internship, I could easily picture the events in it taking place
E2: I could picture myself in the internship
E3: I was mentally involved in the internship while it was going on
E4: After finishing the internship, I found it easy to put it out of my mind
E5: I wanted to learn how the internship would turn out
E6: I found my mind wandering while doing the internship

Reflection meeting discourse

Mentor and student discourse from four reflection meetings was analysed to determine whether the discourse was different between conditions. The reflection meetings were segmented by conversational turn and coded using a set of 21 codes developed using the American Planning Association's description of what professional planners know, do and care about (http://www.planning.org/). The complete set of codes, including example excerpts, is provided in Appendix I. While coding the data, the coder read each excerpt separately and applied one code (1 = presence, 0 = absence) at a time. An educational psychology researcher working in a non-planning domain was trained as a secondary coder and independently coded 150 randomly selected excerpts. For all codes, the primary coder and the secondary coder had a Cohen's kappa greater than 0.6 (Landis & Koch, 1977).

The reflection meetings were analysed qualitatively, as described in the Results section, and ENA was used to triangulate the qualitative findings. As described in more detail below, ENA measures relationships among epistemic frame elements within an epistemic network (Nash & Shaffer, 2013; Rupp et al., 2010; Rupp, Sweet, & Choi, 2010; Shaffer et al., 2009). The urban planning epistemic frame was characterized by individual epistemic frame elements – in this case, the 21 urban planning codes developed as described above – which were applied to each conversational turn in the data. For each participant, we constructed 16 cumulative coding vectors. Each vector represents the discourse elements (codes) used by that participant in the discussion of one of the four questions in one of the four reflection meetings. Epistemic frame theory (and thus ENA) looks at the connections between frame elements as the key variables in modeling student thinking. Accordingly, each cumulative coding vector was converted into an adjacency matrix showing which pairs of frame elements were co-present in the students' discourse during the discussion. That is, if participant p in discussion topic (question) q in reflection meeting m had both frame elements j and k coded in his or her discourse, then the adjacency matrix element A^{p,q,m}_{j,k} = 1. Because we were modeling connections between frame elements, the diagonal of each adjacency matrix was set to 0 (i.e., A^{p,q,m}_{j,j} = 0). We then constructed a cumulative adjacency matrix for each participant p, which represents the pattern of association between epistemic frame elements across the reflection meetings: C^{p}_{j,k} = Σ_{q,m} A^{p,q,m}_{j,k}.

To control for differences in level of participation, the cumulative adjacency matrices were normalized by dividing each value in the matrix by the root mean square of the values in the matrix. Finally, the normed cumulative adjacency matrices for all participants were projected into a metric space by representing each 21 × 21 matrix as a vector with 210 entries, one for each cell in the upper triangle of the cumulative adjacency matrix. The vectors had 210 entries (or dimensions) in the metric space because each of the adjacency matrices was symmetric and had a constant (0) on the diagonal; thus they could be represented by the 210 (21 choose 2) entries in their upper triangles. We performed a singular value decomposition on the vectors representing the cumulative adjacency matrices. This produced a rotation of the adjacency matrices in the metric space that maximized the variance among the matrices. We used the first and second dimensions of the rotated space (the dimensions that captured the most variance in the data) to model the structure of discourse among participants. This process was repeated for mentor discourse across all of the meetings in both experimental conditions, producing a rotated space representing the maximum variance in mentor discourse during the reflection meetings.

The role of quantitative analyses

Because of the small size of the samples (2 mentors and 21 students), qualitative analyses were necessary in some cases. Where possible, quantitative analyses were used, and results were reported as means with standard errors. In some cases, inferential statistics were also computed; however, as with any small-scale study, the results were not generalizable to other populations. Thus, the purpose of such significance tests is to show that additional observations made under the same conditions would show similar results (Shaffer & Serlin, 2004).
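To make the projection procedure described above concrete, the following minimal sketch illustrates the pipeline in Python. It is an illustration rather than the implementation used in the study: the `turns` data structure, the guard against empty matrices and the mean-centring step before the singular value decomposition are assumptions introduced here for the sake of a runnable example.

```python
# Illustrative sketch of the ENA projection (not the implementation used in the study).
# Assumes `turns` is a list of (participant, meeting, question, codes) tuples,
# where `codes` is the set of frame-element indices (0-20) coded in that turn.
import numpy as np

N_CODES = 21  # the 21 urban planning codes

def cumulative_matrix(turns, participant):
    """Sum the binary co-occurrence matrices A^{p,q,m} over questions and meetings."""
    by_unit = {}  # (meeting, question) -> set of codes used by this participant
    for p, m, q, codes in turns:
        if p == participant:
            by_unit.setdefault((m, q), set()).update(codes)
    C = np.zeros((N_CODES, N_CODES))
    for codes in by_unit.values():
        for j in codes:
            for k in codes:
                if j != k:              # diagonal stays 0: we model connections only
                    C[j, k] += 1.0
    return C

def ena_coordinates(turns, participants):
    """Normalize each cumulative matrix, vectorize its upper triangle, rotate by SVD."""
    iu = np.triu_indices(N_CODES, k=1)  # 21 choose 2 = 210 upper-triangle cells
    rows = []
    for p in participants:
        C = cumulative_matrix(turns, p)
        rms = np.sqrt(np.mean(C ** 2))  # root mean square of the matrix values
        rows.append((C / rms if rms > 0 else C)[iu])
    X = np.vstack(rows)
    # Centring the columns makes the SVD a variance-maximizing rotation (an assumption here).
    U, S, _ = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return (U * S)[:, :2]               # first two dimensions, one row per participant
```

In this sketch, student and mentor discourse would each be passed through `ena_coordinates` separately, mirroring the separate student and mentor spaces described above.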
Results

Research question 1: were there differences in mentors' reflection meeting discourse between the two conditions?

The mean word counts of mentors' discourse were computed for each team during each reflection meeting (a total of 3 data points for four meetings in each condition for a total of 24 data points). Across all reflection meetings, mentors in the face-to-face condition used significantly more words (M = 2857, sd = 755, p < 0.05)2 during interactions with their teams than mentors in the chat condition (M = 1244, sd = 327, p < 0.05). The same is true of comparisons between corresponding reflection meetings in both conditions (see Figure 1).

Figure 1 Mean Word Counts of Mentors' Discourse During Reflection Meetings with Standard Error.3 Mean Word Counts for All of the Meetings Were Greater in the Face-to-Face Condition Than in the Chat Condition

An examination of the discourse of one mentor, Elise (pseudonym), working with the 'People for Greenspace' stakeholder team in both conditions, showed that during Reflection Meeting 1, Elise used nearly three times more words in the face-to-face condition (1284) than in the chat condition (433). In both conditions, Elise asked students the same question (see Table 2), but she used more words in the face-to-face condition (66) than in the chat condition (12). Although she used more words in the face-to-face condition, the question was similar across the two conditions. In the chat condition, she said, 'So, with the information that we have, what should we do next?' Similarly, in the face-to-face condition, Elise said: '[I]f you have information about the site, what do we do now as planners? What's our next step?'

Table 2. Excerpt from Elise's Discourse During Reflection Meeting 1 for the People for Greenspace Stakeholder Team Showing That When Asking Students What They Should Do Next (Bold Italics), She Used More Words in the Face-to-Face Condition Than in the Chat Condition
Chat (word count = 12): So, with the information that we have, what should we do next?
Face-to-face (word count = 66): Well so what does that mean okay, I don't want you to look at the calendar and just tell me what the calendar says ok. I really want you to think like planners ok. I want you to think about what, if you have information from your stakeholders, if you have information about the site, what do we do now as planners? What's our next step?

Although the main discourse elements were similar (asking about next steps), Elise provided additional information in the face-to-face condition to contextualize her request: she addressed a student's concern about the calendar, made explicit references to the students as 'planners' and used the term 'stakeholders'. She also repeated herself and used features of face-to-face talk, including filler words (Tannen, 1982) such as 'well', 'so' and 'okay', which contributed to the higher word count. Thus, there are a number of reasons why the word count was greater in the face-to-face condition than in the chat condition.

Similarly, during Reflection Meeting 2 in both conditions, Elise talked about generating hypotheses with data or an interactive model by informing the students that 'iPlan measures the projected social and environmental impacts of zoning changes' (see Table 3). In both conditions, she discussed iPlan's ability to 'test ways of making the site work for the stakeholders without bringing in actual bulldozers' and ended that portion of Reflection Meeting 2 by reminding students in both conditions that the site 'is a complex system, which means that changing one parcel impacts more than one indicator'. She also informed students in both conditions that 'there may be trade-offs with every change'. As in Reflection 1, Elise's discourse in the face-to-face condition contained similar content to her discourse in the chat condition, but her face-to-face discourse contained additional filler words and verbal acknowledgements of what the students already said or knew: '. . . but what all of you were saying is . . . you all recognize that.'

Because Elise discussed similar content regardless of condition, her discourse showed similar patterns of epistemic frame co-occurrence between conditions. These similar patterns are illustrated by the locations of the mentor points (means) for each condition in Figure 2, where points closer together have more similar patterns of co-occurrence than points farther apart. Meeting by meeting, t-tests on ENA-generated discourse means for both chat and face-to-face conditions showed no significant differences (see Table 4). In other words, the variance between the meetings was larger than the variance between the conditions.
Table 3. Excerpt From the People for Greenspace Stakeholder Teams' Reflection Meeting 2 Showing (in Bold Italics) That in Both Conditions, Elise Covered Similar Content
Chat: Because iPlan measures the projected social and environmental impacts of zoning changes, it allows you to test ways of making the site work for the stakeholders without bringing in actual bulldozers. You discovered that one characteristic of the site is that it is a complex system, which means that changing one parcel impacts more than one indicator. There may be trade-offs with every change.
Face-to-face: Because iPlan can measure the projected social and environmental probability changes, it makes you test ways of making the site work for the stakeholders without actually bringing in bulldozers. Well you discovered one characteristic of the site, especially, but what all of you were saying is that it's a complex system. . . That means that changing one parcel impacts more than one indicator and I think that you all recognize that. . . .So there may be trade-offs with every single change.

Figure 2 Mentors' Discourse During Reflection Meetings (Means) Showing That Regardless of the Communication Mode, the Mentors Covered Similar Content During the Reflection Meetings

Research question 2: were there differences in students' reflection meeting discourse between the two conditions?

The mean word counts were computed for each student during each reflection meeting (a total of 21 data points for four meetings for a total of 84 data points). Across all reflection meetings, students in the face-to-face condition used significantly more words (M = 1048, sd = 276, p < 0.05) than students in the chat condition (M = 585, sd = 155, p < 0.05). The same is true of comparisons between corresponding reflection meetings in both conditions (see Figure 3).

An examination of the discourse of student teams who worked with the 'People for Greenspace' stakeholders in both conditions showed that during Reflection Meeting 1, students used twice as many words in the face-to-face condition (307) as in the chat condition (145) (see Table 5). As was the case with the mentors, although there were more words in the face-to-face condition, the main discourse elements were similar across the two conditions. In the chat condition, one student listed the social and environmental issues that the stakeholders cared about by saying, 'People care about wetlands (habitats for sandhill cranes), greenspaces, water quality, and reduction of traffic.' Similarly, in the face-to-face condition, one student discussed the social and environmental issues stakeholders cared about by saying that 'it seemed like the wetlands and also like the culture and the community was also really important'. Again, as was true for the mentors, the student's excerpt from the face-to-face condition included features of face-to-face talk, such as using the words 'like', 'um' and 'well', which contributed to the higher word count.

Again, as with the mentors, the similarity of substantive discussion in students' discourse across conditions is illustrated by the locations of the student points (means) for each condition in the ENA analysis (see Figure 4). Meeting by meeting, t-tests on ENA-generated discourse means for both chat and face-to-face conditions showed no significant differences, with one exception: comparison of the first dimension of each condition in Reflection Meeting 1 (p < 0.05) (see Table 6).
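The meeting-by-meeting comparisons summarized in Tables 4 and 6 can be sketched in code as well. The snippet below is illustrative only: the `coords` dictionary is a hypothetical structure holding each condition's participant scores on one ENA dimension for one meeting, and Welch's independent-samples t-test is used here for simplicity, whereas the tables report paired t-tests.

```python
# Illustrative comparison of ENA coordinates between conditions, meeting by meeting.
# `coords[(condition, meeting)]` is assumed to be a list of scores on one ENA dimension.
import numpy as np
from scipy import stats

def compare_conditions(coords, meetings=(1, 2, 3, 4)):
    for m in meetings:
        chat = np.asarray(coords[("chat", m)])
        f2f = np.asarray(coords[("face-to-face", m)])
        t, p = stats.ttest_ind(chat, f2f, equal_var=False)  # Welch's t-test (assumption)
        print(f"Meeting {m}: chat M={chat.mean():.2f} (SD={chat.std(ddof=1):.2f}), "
              f"face-to-face M={f2f.mean():.2f} (SD={f2f.std(ddof=1):.2f}), p={p:.3f}")
```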
Table 4. Means, Number of Mentor Points in the Mean (N), and SDs for Each Meeting and Each Condition With the Results of Paired t-Tests. There Were No Significant Differences Between the Means of the Conditions (p > 0.05)
Meeting 1, Dimension 1: Chat 0.59 (N = 3, SD = 0.04); Face-to-face 0.49 (N = 3, SD = 0.06)
Meeting 1, Dimension 2: Chat 0.03 (N = 3, SD = 0.04); Face-to-face 0.1 (N = 3, SD = 0.05)
Meeting 2, Dimension 1: Chat 0.05 (N = 2, SD = 0.25); Face-to-face 0.16 (N = 2, SD = 0.1)
Meeting 2, Dimension 2: Chat 0.44 (N = 2, SD = 0.07); Face-to-face 0.4 (N = 2, SD = 0.03)
Meeting 3, Dimension 1: Chat 0.39 (N = 3, SD = 0.08); Face-to-face 0.32 (N = 3, SD = 0.55)
Meeting 3, Dimension 2: Chat 0.18 (N = 3, SD = 0.14); Face-to-face 0.21 (N = 3, SD = 0.16)
Meeting 4, Dimension 1: Chat 0.56 (N = 3, SD = 0.08); Face-to-face 0.32 (N = 3, SD = 0.16)
Meeting 4, Dimension 2: Chat −0.09 (N = 3, SD = 0.1); Face-to-face 0.02 (N = 3, SD = 0.19)

Figure 3 Mean Word Counts of Students' Discourse in Reflection Meetings With Standard Error. Mean Word Counts for All of the Meetings Were Greater in the Face-to-Face Condition Than in the Chat Condition

Research question 3: were there differences in students' learning outcomes between the two conditions?

Students in both conditions significantly increased their scores (0–2 scale) from the intake to the exit interview on matched-pair questions (chat condition: mean intake = 0.2, mean exit = 1.4, p < 0.05; face-to-face condition: mean intake = 0.27, mean exit = 0.91, p < 0.05) (see Figure 5). For example, in the face-to-face condition, during the intake interview, one student suggested, 'They could try to clean it out'. During the exit interview, the same student provided a much more specific, scientifically accurate answer, 'Get rid of big factories in surrounding areas because that lowers the level on CCl4 and NO3.' There was no significant difference between the two conditions in either the intake or the exit interviews, so the communication mode with the mentors did not affect the students' learning outcomes on this particular matched-pair interview question.

Table 5. Excerpts From Individual Students' Discourse During Reflection Meeting 1 for the People for Greenspace Stakeholder Team Showing That When Elise Asked the Students What They Had Just Finished Doing, Students Used More Words in the Face-to-Face Condition Than in the Chat Condition to Discuss Learning About the Stakeholders' Desires (Bold Italics) While Completing the Virtual Site Visit
Chat (word count = 40): I finished the virtual site assessment, and am experimenting with iPlan. . . People care about wetlands (habitats for sandhill cranes), greenspaces, water quality, and reduction of traffic. Character is diverse people, natural beauty and wetlands, local businesses, parks, and community events.
Face-to-face (word count = 104): Um, well, I just finished the virtual site visit and did my site assessment and I found that a lot of the stakeholders cared about the wetlands there. They thought that was a very important thing to the Northside, but based on like the descriptions and stuff given as well like not from like the people but like just the overall description it seemed like the wetlands and also like the culture and the community was also really important and I think that yeah. They like they wanted a way to like keep up the culture and stuff without having to hurt the birds.
Figure 4 Students' Discourse From Reflection Meetings (With Means) Showing That Regardless of the Communication Mode, the Students Discussed Similar Content During the Reflection Meetings

Table 6. Means, Number of Student Points in the Mean (N), and SDs for Each Meeting and Each Condition With the Results of Paired t-Tests. All of the p-values, Excluding the Comparison of the First Dimensions for Reflection Meeting 1, Are Greater Than 0.05, Which Means That There Were No Significant Differences Between the Means of the Conditions
Meeting 1, Dimension 1: Chat 0.13 (N = 10, SD = 0.26); Face-to-face −0.12 (N = 10, SD = 0.12)
Meeting 1, Dimension 2: Chat −0.17 (N = 10, SD = 0.18); Face-to-face −0.13 (N = 10, SD = 0.25)
Meeting 2, Dimension 1: Chat −0.27 (N = 7, SD = 0.18); Face-to-face −0.18 (N = 9, SD = 0.14)
Meeting 2, Dimension 2: Chat 0.04 (N = 7, SD = 0.2); Face-to-face 0.15 (N = 9, SD = 0.27)
Meeting 3, Dimension 1: Chat −0.15 (N = 9, SD = 0.24); Face-to-face −0.3 (N = 8, SD = 0.13)
Meeting 3, Dimension 2: Chat −0.01 (N = 9, SD = 0.29); Face-to-face 0.13 (N = 8, SD = 0.13)
Meeting 4, Dimension 1: Chat −0.07 (N = 7, SD = 0.22); Face-to-face −0.09 (N = 10, SD = 0.17)
Meeting 4, Dimension 2: Chat −0.19 (N = 7, SD = 0.24); Face-to-face −0.18 (N = 10, SD = 0.3)

Research question 4: were there differences in students' level of engagement between the two conditions?

There was no significant difference between the two conditions on the questions adapted from Green and Brock's (2000) measures of engagement (see Figure 6).

Figure 5 Students' Mean Scores (With Standard Error Bars) for the Matched-Pair Interview Question. The Communication Mode With the Mentors Did Not Affect the Students' Learning Outcomes

Figure 6 Students' Mean Scores (With Standard Error Bars) for the Exit Interview Engagement Questions, Which Show No Significant Difference Between the Two Conditions on These Measures of Engagement

Discussion

The results of the analyses above suggest that in both mentoring conditions, the patterns of discourse for both players and mentors were significantly different between reflection meetings that took place at different points during the Urban Science epistemic game. This is, of course, not surprising: they were talking about different parts of the planning process, and the resulting differences show up in both a direct examination of the discussions and in ENA models of the content of student and mentor talk.

However, these results also suggest that regardless of the mentoring condition, there was no significant difference in the domain-relevant substance of the mentor's interactions with players. Mentors' talk addressed the same issues in both conditions. Similarly, students showed no significant differences in the substance of their reflection meeting discourse between conditions. Furthermore, on measures of student engagement in the simulation using Green and Brock's (2000) measure of transportation, students were similarly involved in the fiction of the game in both conditions. Despite concerns that virtual mentors' interactions with students might leave out important components of communication, students in the virtual mentoring condition were as engaged as those in the face-to-face mentoring condition. The gains from intake to exit interviews were similar in both conditions, suggesting that having virtual mentors did not adversely affect students' learning outcomes.
As their responses to a matched-pair interview question showed, students used more scientific language and gave more specific recommendations for addressing an environmental problem after playing the game. This study identified that mentors used similar professional discourse to guide students through the epistemic game regardless of communication mode. Their mentoring led students in both conditions to use similar professional discourse and develop similar epistemic frames. In other words, the co-occurrence of epistemic frame elements within the discourse of both the mentors and the students in each reflection meeting followed similar patterns. These results suggest that the key function of the mentors in Urban Science, to communicate professional ways of thinking, was not diminished in the chat condition. Of course, there was one very important difference in discourse between the two conditions: both students and mentors used more words when communicating face-to-face than they did in chat. This is not particularly surprising because it is easier to speak than to type in many situations. But while it is clear that more words were used in the face-to-face communications, it is less clear that anything more of substance was said. Bierema and Merriam (2002) suggest that the richness associated with face-to-face conversation often diminishes when communication is electronic, but there are several possible explanations for why this might not have been the case in this experiment. First, it is possible – and perhaps even likely – that what is lost in the limited communication medium of chat was either peripheral to the professional substance of the conversation or was provided somewhere else in the epistemic game. The rich game context, including E.A. Bagley & D.W. Shaffer detailed instructions and feedback from the NPCs, the models and templates for professional products provided in the professional resources (e.g., the sample final proposal), and, of course, the experience of interacting with a sophisticated, virtual model of the physical and social environment, all supported the virtual mentoring. Nevertheless, that virtual mentoring can work as well as face-to-face mentoring with the same supports suggests that even the human interactions in a mentoring relationship can work virtually. A second possibility is that even though these data were collected in 2010, it is possible that these students were already accustomed to using chat messages for rich interpersonal communications. If this is the case, it suggests that as young people increasingly use chat to interact with one another, virtual mentoring of the kind examined in this study will become even more useful as an alternative (or supplement) to face-to-face mentoring interactions. In either case, however, an important corollary to the main findings is that in this study, ENA provided a useful tool for quantifying the patterns of discourse of both mentors and students during reflective discussions. ENA models simultaneously showed the differences in substance between different reflection meetings during the intervention, and the similarities in substance between parallel meetings across the different conditions. Moreover, the differences and similarities quantified by the ENA models clearly reflected results of a qualitative analysis of the content of player and mentor talk during different meetings. This study, of course, has a number of limitations. 
First, the small sample size means that any conclusions are limited to what the sample population did in the context of the epistemic game. Second, this paper uses only one near-transfer matched-pair interview question to highlight students’ environmental science learning gains (and the similarities of those gains between conditions), and it uses only one measure to compare students’ engagement between the conditions. More sophisticated measures of learning and engagement integrated into the intervention itself, what Valerie Shute terms stealth assessment, could provide more robust results in future studies (Gee & Shaffer, 2010; Phillips & Popović, 2012; Shaffer, 2009; Shaffer & Gee, 2012; Shute, 2011; Shute & Ventura, 2013; Williamson et al., 2004). Third, by focusing solely on the reflection meetings and interviews, this study © 2014 John Wiley & Sons Ltd Virtual and face-to-face mentoring examines only some of the mentor–student interactions that comprise the learning experience for students in Urban Science. Because putting thoughts into writing may encourage deeper reflection, for example, future studies should control for this possibility and also examine other kinds of mentor–student interaction. Fourth, the mentors in this study were instructed to follow a script while leading reflection meetings, which might have limited the interactions with students that mentors (in either condition) may have had – although we note that in both conditions, mentors added additional material to the script, which did not change the underlying results of the experiment. Last, one significant difference between the face-to-face and the chat conditions is that in the chat condition, students could review earlier parts of conversations (e.g., during a reflection meeting) by scrolling back through the previous chats. Future studies should investigate the impact on learning that may result from having a record of conversations to which students can refer back. Despite these limitations, these results suggest that because using more words did not affect the quality of the students’ professional discourse during the reflection meetings, their post-test outcomes or their level of engagement, chat is a viable method for mentoring in the context of epistemic games. Moreover, these results have the potential to influence the design, implementation and assessment of virtual environments by showing that that mentoring via chat can be as effective as mentoring face-to-face in appropriately structured contexts more generally – and that ENA may be a useful tool for assessing student and mentor discourse in the context of learning interactions. Notes 1 Text in brackets denotes the matched-pair text. The paired Mann–Whitney U-test was also significant: z = −3.776, p < 0.05. 3 Of course, standard error bars as presented here should be interpreted with caution, especially with data derived from small samples. Even when standard error bars do not overlap, there may be no statistically significant difference. In this example, each bar represents the mean of three points, so testing the significance of the individual meetings was not possible. 2 Acknowledgements This work was funded in part by the Macarthur Foundation and by the National Science Foundation through grants DUE-1225885, DRL-0918409, DRL-0946372, DUE-0919347, EEC-0938517 and REC-0347000. © 2014 John Wiley & Sons Ltd 619 References Bagley, E. A., & Shaffer, D. W. (2009). When people get in the way: Promoting civic thinking through epistemic game play. 
International Journal of Gaming and ComputerMediated Simulations, 1(1), 36–52. Bagley, E. A., & Shaffer, D. W. (2011). Promoting civic thinking through epistemic game play. In R. Ferdig (Ed.), Discoveries in gaming and computer-mediated simulations: New interdisciplinary applications (pp. 111–127). Hershey, PA: IGI Global. Beckett, K. L., & Shaffer, D. W. (2005). Augmented by reality: The pedagogical praxis of urban planning as a pathway to ecological thinking. Journal of Educational Computing Research, 33(1), 31–52. Bennett, S., Maton, K., & Kervin, L. (2008). The ‘digital natives’ debate: A critical review of the evidence. British Journal of Educational Technology, 39(5), 775–786. Bierema, L. L., & Merriam, S. B. (2002). E-mentoring: Using computer mediated communication to enhance the mentoring process. Innovative Higher Education, 26(3), 211–227. Brennan, S. E., & Lockridge, C. B. (2006). Computermediated communication: A cognitive science approach. In K. Brown (Ed.), ELL2, encyclopedia of language and linguistics (pp. 775–780). Oxford, UK: Elsevier. Busselle, R., & Bilandzic, H. (2008). Fictionality and perceived realism in experiencing stories: A model of narrative comprehension and engagement. Communication Theory, 18(2), 255–280. Clark, D. B., et al. (2009). Rethinking science learning through digital games and simulations: Genres, examples, and evidence. Washington, DC: National Academies Press. De Janasz, S. C., Ensher, E. A., & Heun, C. (2008). Virtual relationships and real benefits: Using e-mentoring to connect business students with practicing managers. Mentoring & Tutoring: Partnership in Learning, 16(4), 394–411. Dondlinger, M. J. (2007). Educational video game design: A review of the literature. Journal of Applied Educational Technology, 4(1), 21–31. Ensher, E. A., Heun, C., & Blanchard, A. (2003). Online mentoring and computer-mediated communication: New directions in research. Journal of Vocational Behavior, 63(2), 264–288. Gee, J. P. (2007a). Good video games and good learning: Collected essays on video games, learning and literacy. New York, NY: Peter Lang. Gee, J. P. (2007b). What video games have to teach us about learning and literacy. New York, NY: Palgrave/ Macmillan. 620 Gee, J. P., & Shaffer, D. W. (2010). Looking where the light is bad: Video games and the future of assessment. Phi Delta Kappa International EDge, 6(1), 3–19. Green, M. C., & Brock, T. C. (2000). The role of transportation in the persuasiveness of public narratives. Journal of Personality and Social Psychology, 79(5), 701–721. Honey, M. A., & Hilton, M. H. (2011). Learning science: Computer games, simulations, and education. Washington, DC: The National Academies Press. Ito, M. (2010a). Hanging out, messing around, and geeking out: Kids living and learning with new media. Cambridge, MA: MIT Press. Ito, M. (2010b). Mobilizing the imagination in everyday play: The case of Japanese media mixes. In S. SonvillaWeiss (Ed.), Mashup cultures (pp. 79–97). Vienna: Springer. Jaffe, R., Moir, E., Swanson, E. & Wheeler, G. (2006). E-mentoring for student success. In C. Dede (Ed.), Online professional development for teachers: Emerging models and methods (pp. 89–116). Cambridge, MA: Harvard Education Press. Klecka, C. L., Cheng, Y.-M., & Clift, R. T. (2004). Exploring the potential of electronic mentoring. Action in Teacher Education, 26(3), 2–9. Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174. Larson, R. (2006). 
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge, MA: Cambridge University Press.
Linn, M. C., Gerard, L., Ryoo, K., McElhaney, K., Liu, O. L., & Rafferty, A. N. (2014). Computer-guided inquiry to improve science learning. Science, 344(6180), 155–156.
Lynch, C., Ashley, K. D., Pinkwart, N., & Aleven, V. (2009). Concepts, structures, and goals: Redefining ill-definedness. International Journal of Artificial Intelligence in Education, 19(3), 253–266.
Marouda-Chatjoulis, A., & Humphreys, P. (1997). Modelling the process of deciding in real world problems. In F. A. Stowell, et al. (Eds.), Systems for sustainability: People, organizations, and environments (pp. 141–146). New York, NY: Springer.
Miller, H., & Griffiths, M. (2005). E-mentoring. In D. L. DuBois & M. J. Karcher (Eds.), Handbook of youth mentoring (pp. 300–313). Thousand Oaks, CA: Sage Publications.
Morgan, B., Keshtkar, F., Graesser, A., & Shaffer, D. W. (2013). Automating the mentor in a serious game: A discourse analysis using finite state machines. International Conference on Human-Computer Interaction, Las Vegas, NV.
Nash, P., & Shaffer, D. W. (2013). Epistemic trajectories: Mentoring in a game design practicum. Instructional Science, 41(4), 745–771.
Pennebaker, J. W., Francis, M. E., & Booth, R. J. (2007). Linguistic inquiry and word count. Austin, TX: LIWC.
Phillips, V., & Popović, Z. (2012). More than child's play: Games have potential learning and assessment tools. Phi Delta Kappan, 94(2), 26–30.
Resnick, M. (1994). Turtles, termites, and traffic jams: Explorations in massively parallel microworlds. Cambridge, MA: MIT Press.
Rupp, A. A., Gushta, M., Mislevy, R., & Shaffer, D. W. (2010). Evidence-centered design of epistemic games: Measurement principles for complex learning environments. Journal of Technology, Learning and Assessment, 8(4), 4–47.
Rupp, A. A., Sweet, S., & Choi, Y. (2010). Modeling learning trajectories with epistemic network analysis: A simulation-based investigation of a novel analytic method for epistemic games. Educational Data Mining Conference, Pittsburgh, PA.
Schneider, S. J., Kerwin, J., Frechtling, J., & Vivari, B. A. (2002). Characteristics of the discussion in online and face-to-face focus groups. Social Science Computer Review, 20(1), 31–42.
Schön, D. A. (1983). The reflective practitioner: How professionals think in action. New York, NY: Basic Books.
Schön, D. A. (1987). Educating the reflective practitioner. San Francisco, CA: Jossey-Bass.
Shaffer, D. W. (2003). When Dewey met Schön: Computer-supported learning through professional practices. World Conference on Educational Media, Hypermedia, and Telecommunications, Honolulu, HI.
Shaffer, D. W. (2006). Epistemic frames for epistemic games. Computers and Education, 46(3), 223–234.
Shaffer, D. W. (2007). How computer games help children learn. New York, NY: Palgrave Macmillan.
Shaffer, D. W. (2009). Wag the kennel: Games, frames and the problem of assessment. In R. Ferdig (Ed.), Handbook of research on effective electronic gaming in education (pp. 577–592). Hershey, PA: IGI Global.
Shaffer, D. W., & Gee, J. P. (2012). The right kind of GATE: Computer games and the future of assessment. In M. C. Mayrath, et al. (Eds.), Technology-based assessments for 21st century skills: Theoretical and practical implications from modern research (pp. 211–228). Charlotte, NC: Information Age Publications.
Shaffer, D. W., & Graesser, A. (2010). Using a quantitative model of participation in a community of practice to direct automated mentoring in an ill-formed domain. Intelligent Tutoring Systems Conference, Pittsburgh, PA.
Shaffer, D. W., & Serlin, R. (2004). What good are statistics that don't generalize? Educational Researcher, 33(9), 14–25.
Shaffer, D. W., Hatfield, D. L., Svarovsky, G. N., Nash, P., Nulty, A., Bagley, E. A., . . . Frank, K. (2009). Epistemic network analysis: A prototype for 21st century assessment of learning. International Journal of Learning and Media, 1(1), 1–21.
Shute, V. J. (2011). Stealth assessment in computer-based games to support learning. Computer Games and Instruction, 55(2), 503–524.
Shute, V., & Ventura, M. (2013). Stealth assessment: Measuring and supporting learning in video games. Cambridge, MA: MIT Press.
Squire, K. D. (2005). Changing the game: What happens when video games enter the classroom. Innovate: Journal of Online Education, 1(6). Retrieved from http://www.editlib.org/p/107270/
Squire, K. D. (2011). Video games and learning: Teaching and participatory culture in the digital age. New York, NY: Teachers College Press.
Tannen, D. (1982). Oral and literate strategies in spoken and written narratives. Language, 58(1), 1–21.
Vogel, J. J., Vogel, D. S., Cannon-Bowers, J., Bowers, C. A., Muse, K., & Wright, M. (2006). Computer gaming and interactive simulations for learning: A meta-analysis. Journal of Educational Computing Research, 34(3), 229–243.
Voss, J. F. (2014). On the solving of ill-structured problems. In M. T. H. Chi, R. Glaser, & M. J. Farr (Eds.), The nature of expertise (pp. 261–286). Hillsdale, NJ: Lawrence Erlbaum Associates.
Whittaker, S. (2003). Theories and methods in mediated communication. In A. C. Graesser, M. A. Gernsbacher, & S. R. Goldman (Eds.), The handbook of discourse processes (pp. 243–286). Mahwah, NJ: Erlbaum.
Williamson, D. M., Bauer, M., Steinberg, L. S., Mislevy, R. J., Behrens, J. T., & DeMark, S. F. (2004). Design rationale for a complex performance assessment. International Journal of Testing, 4(4), 303–332.
Wilson, K. A., Bedwell, W. L., Lazzara, E. H., Salas, E., Burke, C. S., Estock, J. L., . . . Conkey, C. (2009). Relationships between game attributes and learning outcomes: Review and research proposals. Simulation & Gaming, 40(2), 217–266.

Appendix I
The Urban Science Coding Scheme Including the Code Label, Description and Examples for the 21 Codes Used to Code the Matched-Pair Interview Question, Final Proposals and Reflection Meeting Discourse.

E1: Justification considers and describes stakeholders with voices
Description: Using people's concerns (sometimes conflicting) to justify a decision as a planner would (e.g., a compromise or a resolution).
Example: . . .But it's also really important for us to try to meet everybody's needs and from what I heard from just these two different groups, you guys have some pretty different needs, right? So we have people who want to really preserve greenspace and people who want to develop and have more housing and more things like that, so we're going to have to come up with some compromises, right?

E2: Justification considers and describes stakeholders without voices
Description: Using the concerns/needs of environmental stakeholders as a planner would (e.g., needs of animals, plants, habitat, water or air quality). Using the concerns/needs of future generations as a planner would.
Example: I do not think the amusement park should build on this wetland even if they will create a new one elsewhere. A new man-made wetland would lack the complex interactions and relationships existing in the current wetland. . . I do not think a created wetland could suffice to cover the damage to the inhabitants and surrounding habitats caused by the destruction of the original wetland.

E3: Justification considers and describes decisions using objective data (not stakeholder opinions)
Description: Using objective data (not stakeholder opinions) to justify a decision as a planner would.
Example: By reducing the number of factories and increasing the number of wildlife sanctuaries, both the CCl4 and NO3 levels should decrease.
V1: Serving the public interest
Description: Seeing one's job and/or responsibility as representing the concerns and meeting the needs of others.
Example: Well, one issue is how are the changes going to affect the people and also the wildlife living in the city? Most people would feel like that's an important thing to keep in the back of your mind. Do you think your stakeholders will approve?

V2: Multiple perspectives
Description: Seeing one's job and/or responsibility as taking into account different residents' preferences and/or perspectives about a site. Seeing one's job and/or responsibility as being aware of/being able to identify bias (personal and stakeholders' bias).
Example: Saeed is having difficulty selling houses due to the lack of jobs. He has suggested that we increase job opportunities in the area. Gabe reports that the total number of sales in businesses are down, making it hard to start new businesses. Having more people visit should help increase sales. Natalie says that the levels of nitrates and carbon tetrachloride are above acceptable levels, but we could safely change the limits. . .

V3: Environmental concerns
Description: Seeing one's job and/or responsibility as representing environmental concerns.
Example: They should not be allowed to harmfully affect the lives of others and the cleanliness of the environment.

S1a: Explicit use of data
Description: Numbers, even if they are present without any words.
Example: I want 50 more housing units. 2286.

S1b: Implicit use of data
Description: More/less, acceptable/unacceptable, higher/lower (even if the term is by itself).
Example: I want more housing units. Higher. I decreased housing.

S1c: Information, data, research
Description: Explicitly refers to information, data or research.
Example: I need more information/data. Look at the graphs.

S1d: Data source
Description: Explicitly mentions a source of data.
Example: I listened to stakeholder feedback. I learned from the virtual site visit that. . . iPlan.

S2: Hypothesis generation and testing
Description: Ability to hypothesize projected impacts and trade-offs of multiple scenarios. Ability to test hypotheses (e.g., social and environmental) in a closed environment (using iPlan).
Example: I believe that this will allow the character index to go up after a period of time by allowing new people to come into the area. That is why I have left the current character index untouched.

S3: Identifying goals
Description: Ability to identify stakeholders' goals for the site including using terms like unacceptable, acceptable, more, less (most often found in the site assessment, stakeholder assessment). Ability to state the goals the planner was aiming for in a proposed urban plan (most often found in the preference survey, final proposal).
Example: My goals in this proposal were the following: to increase the crane nesting sites – to increase the water quality – to have minimal traffic – to have a high sale ($$$) – to have a good neighbourhood character. 75 is acceptable. They want more [where we assume 'they' refers to stakeholders]. I need the number of crane nesting sites to be higher.
S4: Justifying recommendations
Description: Ability to justify specific recommendations and/or action to others. [If players justify why their stakeholders want something, that does not count as S5 because it is not a recommendation the players are making.]
Example: By increasing the amount of housing and jobs with retail areas, people can open business and also move into the area. This will bring in new individuals into the area which allows for the areas growth in terms of diversity. I believe that this will allow the character index to go up after a period of time by allowing new people to come into the area. That is why I have left the current character index untouched.

S5: Compromise
Description: Ability to explicitly mention that a compromise is being/was made.
Example: I believe that I can improve on my judgement when creating city plans in which I need to compromise with other groups in order to satisfy the needs of everyone. I think that this time, I was more biased towards being more business and industrial and I think ignored people who wanted more greenspace. . .

K1: Social impact of decisions on communities
Description: Identifying and measuring social impacts or issues such as: neighbourhood character index, character index, housing, jobs, sales, traffic. Identifying stakeholders/stakeholder groups including: stakeholder, stakeholder group, People for Greenspace, Madison Developers' Consortium, Northside Neighbors, Equal Opportunities for All, specific stakeholder's names.
Example: By increasing the amount of housing and jobs with retail areas, people can open business and also move into the area. This will bring in new individuals into the area which allows for the areas growth in terms of diversity. I believe that this will allow the character index to go up after a period of time by allowing new people to come into the area. That is why I have left the current character index untouched.
K2: Environmental impact of decisions on communities
Description: Identifying and measuring environmental impacts or issues such as: sandhill crane, nesting sites, carbon tetrachloride, CCl4, nitrates, NO3, greenspace, water quality, water run-off, run-off, ppb, ppm, marshes, air quality, habitat quality, habitat.
Example: Natalie says that the levels of nitrates and carbon tetrachloride are above acceptable levels, but we could safely change the limits.

K3: Interconnectedness
Description: Ability to identify and/or describe the possible consequences and/or trade-offs of hypotheses and/or decisions [the trade-offs can be social, environmental or socio-economic]. Ability to discuss constraints of the model (iPlan) or the planning process.
Example: Cities and people affect their surroundings and almost everything they do. The pollution that cities and factories bring, as well as the cars that people are driving. The urbanization takes away from coastal areas, natural forest and many other environments.

K4: Following an existing process or strategy
Description: Virtual site visit, site assessment, preference survey, iPlan, target identification matrix, matrix, TIM, stakeholder assessment, final proposal, recommendations, justifications, limitations, map, target, professional resource, request for proposals, final plan, plan (if used as a noun), urbanization.
Example: I learned from this experience that a city planner must take into consideration a lot of opinions including their own. I did not know about such pressures before. Also, I learned about the multistep process planners go through to plan a city from asking for opinions all the way until proposing a final plan. This gives me new appreciation for the work of people which have planned any city I go to.

K5: Knowledge of land use codes
Description: Land use, land use code, zoning, parcel. R1, R2, R3, R4, single family, duplex, multi-family. C1, C1-R3, C1-R4, C2, retail, office. M1, M2, manufacturing, industry, factory. OS, OS-R, OS-W, open space, open space recreational, wetland.
Example: Changing the wetlands to recreational space, so that there are less cranes and more leisure space for parks and such. Changing R1 into R3 so that there are more houses within each other and more surrounding space. Changing M2 into C1 or C2 so that there is more retail and offices.

I1: Planner identity
Description: Planner, Company, UDA, Urban Design Associates.
Example: So let me rephrase a little bit what it sounds like we're saying, so, being a planner you have to do a bunch of things. . .

I2: Intern identity
Description: Internship, intern, staff. Players' typed staff pages are coded for I2.
Example: I wrote it on my staff page. . . .it's really helpful for Maggie to know how your internship is going. . .
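For readers who want to experiment with the coding scheme computationally, the sketch below shows one simplified way the codes above could be represented and how co-occurrences of codes within discourse segments might be tallied; co-occurrence structure of this general kind is the raw material from which ENA builds its networks. The code labels are abbreviated from Appendix I, while the segments, the segmentation rule and the counting rule are hypothetical assumptions for illustration, not the authors' ENA implementation.

```python
# Illustrative sketch only: a simplified representation of the Appendix I coding
# scheme and an ENA-style co-occurrence tally. Segmentation and counting rules
# here are assumptions, not the authors' implementation.
from itertools import combinations
from collections import Counter

# A few code labels abbreviated from the Appendix I coding scheme.
CODES = {
    "E1": "Justification considers stakeholders with voices",
    "E3": "Justification uses objective data",
    "S2": "Hypothesis generation and testing",
    "K2": "Environmental impact of decisions",
}

# Hypothetical coded discourse: each segment (e.g., one turn of talk in a
# reflection meeting) lists the codes a rater assigned to it.
coded_segments = [
    {"E1", "S2"},
    {"E3", "K2", "S2"},
    {"E1", "E3"},
]

# Tally how often each pair of codes co-occurs within a segment.
co_occurrence = Counter()
for segment in coded_segments:
    for pair in combinations(sorted(segment), 2):
        co_occurrence[pair] += 1

for (a, b), count in sorted(co_occurrence.items()):
    print(f"{a} - {b}: {count}")
```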