RESEARCH AND TEACHING

Aligning Assessment to Instruction: Collaborative Group Testing in Large-Enrollment Science Classes

By Marcelle A. Siegel, Tina M. Roberts, Sharyn K. Freyermuth, Stephen B. Witzig, and Kemal Izci

We describe a collaborative group-testing strategy implemented and studied in undergraduate science classes. This project investigated how the assessment strategy relates to student performance and perceptions about collaboration and focused on two sections of an undergraduate biotechnology course taught in separate semesters. We compared scores of 115 students on paired individual and group test questions of related concepts during two course exams. Interviews were conducted with students (n = 9) to identify perceptions, and interviews were conducted with faculty (n = 6) to explore instructors' reactions to the assessment strategy. Interviewed instructors included both those directly involved in the biotechnology course and others at institutions that trialed the collaborative group-testing strategy within the context of their science courses. Findings showed significantly higher performance on the group portion of the test, implemented first, than on the individual portion of the test, implemented second. Students reported that the collaborative assessment strategy helped to (a) stimulate thinking, (b) build ideas, (c) improve engagement, and (d) reduce test anxiety. Faculty reported that (a) students learned through the process, (b) they gained an appreciation of students' collaborative skills, and (c) they found the strategy particularly helpful for students with diverse abilities. College instructors who use collaboration in the classroom should explore group assessment strategies to improve alignment between instruction and assessment.

Collaboration is an essential aspect of the scientific enterprise. It is required to build on theories and advance science, to monitor and deliver quality research, to share expensive resources and instrument development, and to expand opportunities for transdisciplinary innovations and professional development. Working together is seen as a 21st-century skill that all professionals, especially scientists, require. Moreover, collaborative group work is important to science teaching to help students learn skills they will need for future jobs. Instructors also aim to design effective learning environments where students gain motivation and knowledge from each other. Recent science education reform documents recognize the importance of collaboration and encourage it within and among science disciplines (American Association for the Advancement of Science, 2011; National Research Council, 2012). A major component of implementing this reformed view of instruction is developing "skills to participate in diverse working communities, as well as the ability to take full advantage of their collaborators' multiple perspectives and skills" (American Association for the Advancement of Science, 2011, p. 15).

One overlooked aspect of science reform is assessment. Instruction becomes more innovative, but assessment practices often lag behind. One innovative assessment strategy is collaborative group testing. Group testing is defined as a strategy that encourages students to collaborate and learn from each other while completing a test or quiz (Cortright, Collins, Rodenbaugh, & DiCarlo, 2003).
It has been shown that group testing (a) enhances learning and increases student retention of course content (Cortright et al., 2003; Eaton, 2009; Sandahl, 2010), (b) provides opportunities to collaborate and to use critical-thinking skills (Lusk & Conklin, 2003), and (c) fosters more positive and collaborative relationships among students while decreasing test anxiety (Kapitanoff, 2009; Sandahl, 2010). However, although the benefits of group testing for student learning and motivation are documented, it has rarely been used within science fields. Therefore, we incorporated a group-testing strategy into our science and society course to enhance our students' learning of the course content.

To improve our assessment practices for science and society courses, we examined our instruction and asked, "How can we improve our assessment to better align with our instructional practices?" The importance of aligning instruction and assessment is discussed in the National Research Council's (2001) report, Knowing What Students Know, which describes cognition (how students learn), observation (how students are assessed), and interpretation (how assessments are scored and interpreted). In our large-enrollment, fixed-auditorium-seating classes (which do not include laboratory or discussion sections), we routinely conduct case studies and other activities (such as debates and inquiries with simple lab materials) that require students to work in pairs or groups of four to solve a problem together, consider scientific concepts, and apply the ideas to a real-world decision. Group activities occur at least twice a week and, along with individual activities and mini-lectures, aid comprehension of the subject matter. We have embedded assessments into these activities over time (Rebello, Siegel, Freyermuth, Witzig, & Izci, 2012; Witzig, Freyermuth, Siegel, Izci, & Pires, 2013). We have also developed a biotechnology concept instrument that we use as a formative preassessment to shape our instruction and as a postassessment to gauge student conceptual understanding throughout the course (Witzig et al., 2014). The next frontier was to reconceptualize our testing practices. The course was developed to have three traditional exams and an optional final. In this article, our purpose is to describe a collaborative group-testing strategy that we incorporated for a portion of the exams. We detail how it was implemented in courses at the University of Missouri and how faculty and students viewed the benefits as well as learning gains; we also briefly offer perspectives from faculty at three other universities.

Implementation of group testing

Context
Our class context for group testing at the University of Missouri was a Biotechnology and Society course. The course was offered by the Biochemistry Department and was intended for nonbiochemistry majors. Typical enrollments were 120 students per semester, drawn mainly from the College of Agriculture, Food and Natural Resources (majors such as agriculture, agricultural journalism, hotel and restaurant management, plant sciences, and animal sciences). For each of the three course exams, students worked together for the first third of the exam on a collaborative problem, then completed the remainder of the exam on their own. The individual portion of the exam included questions that probed students' understanding of the concepts addressed on the group portion (see Figure 1).
Similarly, class time during instruction included both individual and group activities, using a mixture of assigned group seating and open seating. The conceptually linked questions examined for this article typically included open-ended questions on the group portion and a mix of open-ended and closed-ended (e.g., multiple-choice) questions on the individual portion. Our partner schools included a state community college, a private liberal arts college, and an out-of-state research university that mirrored our assessment strategy in their own contexts. These courses included nonmajors biology courses, majors biology courses, and majors chemistry courses.

Logistics
Students were prepared for group testing from the first day of class through the environment we created in the classroom. The class was apportioned into groups through assigned seating, and groups worked together on multiple in-class activities prior to the first test. In one of our classes, the groups were randomly assigned and changed approximately every 2 to 3 weeks. Heterogeneous grouping through random assignment displays fairness and is typical of most group situations students will face in their careers (Crowe & Hill, 2006). In the other class, students were purposely assigned to base groups that included a mix of majors and genders. Most activities were done in base groups to encourage students to form a familiar, collaborative team for the semester. Base groups are intentionally formed to enhance a sense of belonging, an important condition for college classrooms (Johnson, Johnson, & Smith, 1998; Smith, 2000). Seating was open the remainder of the time to accommodate student preferences.

For both of the Biotechnology and Society classes, test days involved new, random group assignments, so students did not know beforehand who would be in their testing group. In our auditorium-style classroom, students were given a handout as they came into class to help orient them to their assigned seats. On one side was the list of students with their seat assignments; on the other side was a map of the desks in the classroom with seat numbers and groups indicated. By the date of the first test, students were accustomed to finding their seats from the map because we had practiced it during other group activities. Finding the correct seat and group did not seem to be a particularly stressful activity for students, and the instructors and undergraduate teaching assistants were ready to help as necessary.

Before the test was distributed, instructors reminded the class about the logistics of the test. The group test was handed out, and each student received his or her own copy. Students were given 15 minutes to work with their group to answer the questions on the test. Each student responded individually in writing. Students could choose to incorporate group-consensus answers for the group-testing questions, or they could develop their own answers if they did not agree with what the group discussed. After 15 minutes, the class was told that there would be no more talking and no more working in groups. At this point, the individual part of the test was distributed. Students kept their group tests and could continue working on them by themselves if they chose. At the end of the 50-minute class, each student turned in both the group test and the individual test.
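To make the test-day logistics concrete, the following is a minimal sketch, in Python, of how a class roster might be shuffled into random testing groups and seats to produce the two-sided handout described above. It is illustrative only, not the tool we used; the group size of four, the seat-numbering scheme, and the student names are assumptions for the example.

# Hypothetical sketch: shuffle a roster into random test-day groups and seats.
# Group size, seat numbering, and names are assumptions, not the authors' system.
import random

def assign_test_groups(roster, group_size=4, seed=None):
    """Return a dict mapping each student to a (group, seat) pair."""
    rng = random.Random(seed)
    shuffled = roster[:]          # copy so the original roster is untouched
    rng.shuffle(shuffled)
    assignments = {}
    for i, student in enumerate(shuffled):
        group = i // group_size + 1   # groups numbered from 1
        seat = i + 1                  # consecutive seats, so each group sits together
        assignments[student] = (group, seat)
    return assignments

if __name__ == "__main__":
    roster = ["Avery", "Blake", "Casey", "Devon", "Emery", "Finley", "Harper", "Jordan"]
    # Printing alphabetically mirrors the look-up list students used to find seats.
    for student, (group, seat) in sorted(assign_test_groups(roster, seed=42).items()):
        print(f"{student:8s}  group {group}  seat {seat}")

Printing the same assignments sorted by seat number would yield the second side of the handout, the room map organized by group.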
Grading
Although members of the group could assist each other during the test, each individual was accountable for his or her own written responses. The group tests for each group were graded at the same time so the grader was aware of each group member's answers to the questions. This was done to ensure that grading within the group was consistent. However, each student was graded on his or her own answers. In some cases, the answers were not the same, or some were more complete than others. Group members therefore did not necessarily earn the same grade on the group test as other members of their group. Additionally, to ensure grading consistency, we used scoring guides for the open-ended questions and met as a group to discuss and agree on how to score borderline responses.

Impact of group testing

Student reactions
Although initial reactions to group testing can be skeptical, once students have the opportunity to try it, they become overwhelmingly positive. Students perceived more benefits than drawbacks to collaborative group testing, as discussed next. We found this previously on surveys of one cohort of 115 students (reported in Roberts et al., 2011) as well as in the data reported here from an additional cohort: interviews with 9 participants. Students were sorted into three performance levels on the basis of scores from the first and second exams. Students from each performance level were randomly selected for a 60-minute semistructured interview on the basis of their scores; they included high- (n = 2), average- (n = 4), and low-scoring students (n = 3). These three groups also included both science and nonscience majors. After transcribing the interviews, we developed themes by analytic induction, with consideration of differences between achievement groups (Patton, 2002).

TABLE 1. Representative student responses to group testing.

Stimulates thinking:
"On this last test I almost had a mental block and it was nice to start out with having people almost just get me going. You can bounce ideas off of other people."
"It was interesting how the thought processes varied within our small group."
"I knew much of what was being asked of us but being able to bounce it off my group members allowed me to construct my answers in a more systematic way."

Building on ideas:
"If you are in a group then you can hear other people's ideas. And if you think someone is right then you can bring up their point about it and trigger someone else's memory."
"Yes, we all contributed ideas, which helped to better understand the questions asked, and helped form an answer."
"Each group member was able to put their input in and then listen to the other person's input to collaborate for a final answer."

Engagement:
"I've never been in a group assignment or group testing situation where everybody didn't have an idea, or an opinion, or an answer in the group."
"I felt everyone contributed. It wasn't necessarily an equal contribution, but everyone did help in some way."
"It was really good. I think it was a little hard to hear because it was so many people talking at one time."

Test anxiety:
"It helps because if I can hear it and talk to the other person in that first ten minutes then that kind of ultimately helps me with the rest of the test. . . . It kind of gets you thinking about the subject."
"I think that we all knew the answers already, so it was just us reassuring each other that we put everything down that was necessary."
"I am a poor test taker. Having a group-testing setting calms me down and allows me to think out loud."
We identified four specific benefits that students discussed: stimulating thinking, building ideas, improving engagement, and reducing test anxiety (Table 1). First, students found that working in a group helped them when they were stuck and stimulated thinking by allowing them to bounce ideas off other students. Second, students reported that the group-testing format helped them build understanding by clarifying their thoughts and learning from each other. This was also supported by Likert surveys: Most students (85%) said that the group discussions enhanced understanding and helped them answer the questions on the group portion of the test (Roberts et al., 2011). Third, we discovered that students found the test more engaging than an individual test. They enjoyed the interaction and recommended group testing for their other classes. A final benefit students discussed was reduced test anxiety. Students reported that group testing relaxed them before they took the individual portion of the test. They appreciated being able to receive peer feedback, and it increased their confidence before the remainder of the test. This is useful for faculty to know: Instructors who do not want to use group testing for a grade might still offer a warm-up period before the test in which students discuss questions in groups. Overall, once students had completed the group test twice, they recognized benefits for their learning and motivation. One student summarized that it was "almost like a little bitty study session before you take the test."

We also found two main drawbacks that students identified. First, students recognized that some groups were stronger than others in terms of knowledge and collaboration. For example, referring to one of his groups, a student stated, "We didn't help each other very much." Another pattern of comments was that there was less of an advantage for students who already knew the material.

Student performance
We examined scores of 115 students on paired individual and group test questions of related concepts during the second and third course exams.

FIGURE 1. Example of matched group and individual assessment questions from Exam 2.

Group test: Frederick Griffith accidentally discovered transformation when attempting to develop a vaccine for pneumonia. He injected mice with samples from S-strain (smooth, virulent) and/or R-strain (rough, nonvirulent) pneumococci bacteria (Streptococcus pneumoniae). Which of the following results is consistent with Griffith's experiments?
_____ A) injected S-strain; mouse lives.
_____ B) injected R-strain; mouse dies.
_____ C) injected heat-killed S-strain; mouse lives.
_____ D) injected mixture of heat-killed S-strain and live R-strain; mouse lives.
_____ E) injected mixture of heat-killed R-strain and live S-strain; mouse lives.
How do the results of the Griffith experiment illustrate whether or not DNA is alive?
How do the results of the Griffith experiment relate to the information flow within an organism that is shown by the central dogma?

Individual test: The Griffith experiment was important because it showed:
a) That heat does not kill bacteria.
b) That when a cell dies, the genetic information in it dies.
c) That genetic information is retained outside of a living cell.
d) That dead bacteria can come back to life if they are injected into a live animal.
The central dogma explains:
a) the theory of evolution
b) cell theory
c) the importance of both structure and function in biochemistry
d) the flow of genetic information within and between cells
e) all of the above
True or False: ______ If a cell dies, the DNA in it loses its information. Briefly explain:

The exams took place during one semester of one course (we planned to include Exam 1 also, but its questions were not conceptually linked, so we did not include them in the data analysis; we subsequently improved the wording of linked questions on Exams 2 and 3). We found that student test averages for conceptually related items were higher on the group portion of the exam than on the individual portion. Student scores for the group portion of Exams 2 and 3 averaged (±SEM) 90% (±1.67%) and 83.32% (±1.30%), respectively, and student scores for the individual portion of Exams 2 and 3 averaged (±SEM) 73.18% (±1.33%) and 68.81% (±1.48%), respectively. This is similar to other studies (Eaton, 2009), yet the more important question is whether the group portion of the exam is educative and whether it helps all levels of students or only low performers. One could expect only low performers to benefit from collaboration because they would learn from the more knowledgeable students. However, another hypothesis is that all students gain, perhaps because the group discussion clarifies understanding for high performers or because the best way to learn something well is to teach it to another.

Group and individual conceptually related questions were scored for Exams 2 and 3, and student data were separated into three categories on the basis of the average score of the three course exams: high performers (80%–100%, N = 38), median performers (70%–80%, N = 39), and low performers (<70%, N = 34). An example of group and individual questions that are conceptually related is shown in Figure 1. T-tests were performed using Excel to determine whether statistically significant differences were observed in student performance on the group and individually answered questions on Exam 2 (Figure 2) and Exam 3 (Figure 3).

FIGURE 2. Mean group and individual scores of students (±SEM) on conceptually related questions on Exam 2. Differences between group and individual scores were not significant for high-performing students (N = 38) but were statistically significant for median-performing (N = 39) and low-performing (N = 34) students (p < .01).

FIGURE 3. Mean group and individual scores of students (±SEM) on conceptually related questions on Exam 3. Differences between group and individual scores were significant for high-performing (N = 38, p < .05), median-performing (N = 39, p < .01), and low-performing students (N = 34, p < .01).

Our analysis of Exam 2 showed that high-performing students did not score significantly differently on the two portions of the exam, but the median and low performers' scores were significantly different (p < .01). Analysis of Exam 3 showed that the high performers' scores were significantly different (p < .05), and the median and low performers' scores were significantly different (p < .01). For Exam 2, it appears that the concepts were well enough understood that the high performers did not need the group interaction to answer the related questions.
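For readers who want to reproduce this kind of comparison, the following is a minimal sketch of the analysis described above. We performed the tests in Excel; the sketch substitutes Python with NumPy and SciPy, assumes a paired t-test (each student contributes both a group and an individual score on the linked questions), and uses hypothetical placeholder scores rather than our data.

# Hypothetical sketch of a group-vs.-individual comparison (not the authors' Excel
# workbook). Assumes paired scores for the same students; all numbers are made up.
import numpy as np
from scipy import stats

def compare_group_vs_individual(group_scores, individual_scores):
    group = np.asarray(group_scores, dtype=float)
    indiv = np.asarray(individual_scores, dtype=float)
    t_stat, p_value = stats.ttest_rel(group, indiv)   # paired t-test
    return {
        "group_mean": group.mean(), "group_sem": stats.sem(group),
        "individual_mean": indiv.mean(), "individual_sem": stats.sem(indiv),
        "t": t_stat, "p": p_value,
    }

# Hypothetical percent-correct scores for one performance band of students.
group_portion = [88, 92, 75, 95, 83, 90, 79, 86]
individual_portion = [70, 81, 62, 88, 66, 74, 60, 77]
print(compare_group_vs_individual(group_portion, individual_portion))

In practice, the same comparison would be run separately for each performance band (high, median, low) and for each exam, matching the breakdown reported in Figures 2 and 3.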
The data (shown in Figure 2) suggest that median students in particular benefited from the group interaction, as their group scores were significantly higher than their individual scores on the related questions. The low performers also received a significant benefit on the group portion but still showed a lack of understanding on that portion and were not able to apply the knowledge on the individual portion of the exam. Overall, the analysis shows a relative difference in scores by student level. All groups could gain from the group portion of the test, but we found that the low and median performers' gains were significant on Exam 2.

For Exam 3, the average scores dropped for all three groups. The high and median performers' scores were lower on both portions of the exam. The low-performing group's scores on the group portion were slightly higher (65% compared with 61%), whereas their scores on the individual portion were much lower (23% compared with 42%), resulting in an overall decrease. On the basis of discussions with long-term instructors of the course, we know that the third exam was typically the most difficult and often had the lowest scores. We thus believe the reason for the drop was that the content was more challenging for all students. Alternatively, if students were relying on group scores to improve their overall score, we would have expected a drop in scores earlier in the semester. A control group would help provide clarity on this point, but all sections of this course (only one per semester) are assessed in this way; it would require a substantial shift in teaching and assessment philosophy to create such a section, especially when we feel that it would be a disservice to student learning.

On Exam 3, all levels of students appeared to gain understanding by working together in groups but showed an incomplete understanding on the subsequent individual portion of the exam. The gap between the level of understanding apparent on the group portion and that shown on the individual portion widened as the level of performance decreased, suggesting that low-performing students gained more benefit on the group portion of the exam. Overall, the data suggest that for exams that might be conceptually easier, low- and median-performing students gain a benefit from group testing, but high-performing students do not see the same benefits. Conversely, when the exam concepts are challenging for all students, everyone, including high performers, appears to gain from the group interaction. We would also like to see improved responses on the individual conceptually linked questions, based on learning during collaboration. The group-test effect implies that although collaboration among students aids learning, individuals need further support to fully understand and express the concepts. Future research could focus on additional ways to examine student learning beyond test performance.
Faculty reactions
Instructors at our university and three other institutions have carried out the group-testing strategy described in this article. We collected their reactions to the strategy through informal interviews (which were either audiotaped and transcribed or quoted as they spoke). We interviewed six faculty members who had implemented the group-testing strategy at least once. Representative quotes are shown in Table 2. We were not probing particular issues, but we noticed three themes that could be explored through further research: (a) instructors thought students learned through the process and were supportive of it, (b) instructors gained an appreciation of students' collaborative skills, and (c) instructors found the strategy helpful for students with diverse abilities. Faculty also mentioned the need for more resources to support them in learning to implement this strategy in their courses.

TABLE 2. Faculty perceptions of group testing (faculty comment, followed by type of institution).

"I think the group testing is a very educative activity since the kids clean up little things by talking to each other. Especially the kids involved at the center of this activity gained more." (Research university)
"Group testing helps students learn. That's why I keep doing it." (Research university)
"I used group testing for the first time. The whole test was collaborating with a group. The students loved it. I could see them helping each other and light bulbs going off. I will definitely try this again." (Liberal arts college)
"It lets you know who is a team player and who isn't cooperative. Team work is a really important attribute in the real world that I instill in my students to prepare them." (Community college)
"I have some ADA [diverse abilities] students, and it definitely makes them more comfortable." (Community college)
"My ADA [diverse abilities] students do not want to take the test with Disability Services because they would rather have peers to discuss the content with in a group." (Research university)
"I definitely believe group testing is great but it takes time and space and I do not know how to do that well." (Research university)

Discussion
Although collaborative testing is growing in the professional and clinical sciences (e.g., Cortright et al., 2003; Giuliodori, Lujan, & DiCarlo, 2008), little change has been accomplished in other science fields. Group-testing innovations are also rare in undergraduate courses, especially large-enrollment courses. Our partners at our institution and others have implemented group testing in a variety of contexts, from small community college and liberal arts classrooms to large, diverse universities. Different structures for group testing have certain advantages and disadvantages. On the basis of our experiences, we briefly offer ideas in Table 3 to encourage exploration of these strategies.

TABLE 3. Examples of variations of group testing.

Variation: Divide students into multiability groups.
Potential benefits: Use of varied groups encourages student learning and promotes interdependence. Reduces problems that arise from students choosing friends (e.g., students feeling compelled to team with certain people, or students not having a group).

Variation: Students take the individual test first, then the group test.
Potential benefits: Provides an incentive for peer learning, such as a bonus point if a student is correct on Part 1 but wrong on Part 2. This "jackpot effect" (Eaton, 2009) removes any potential penalty for knowing the right answer but being dissuaded by the later group discussion (a scoring sketch follows this table).

Variation: The group is replaced with open discussion with anyone.
Potential benefits: Students are not stuck with one group; they can move around the classroom and discuss with anyone they like, thus perhaps enhancing learning. (Might be better suited to smaller classes or those without fixed seating.)

Variation: Students answer on one group test. The group must come to consensus.
Potential benefits: Less scoring time for the teacher.

Variation: After the group portion of the test, the instructor joins the discussion and provides feedback before the individual portion.
Potential benefits: Students receive immediate feedback to improve their understanding and confidence in their responses, potentially enhancing learning.

Variation: The first half of the test allows students to work in any manner they prefer (using books, the internet, group discussion). Next, instructors collect papers, and students take the last half of the test individually.
Potential benefits: The idea is that this aids a particular group of students who will have a remarkable learning experience, while those who know the material well or are lost will not benefit much.
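As an illustration of the "jackpot" incentive named in Table 3, here is a hypothetical scoring rule in Python. Eaton (2009) describes the incentive, not this implementation; the one-point bonus value, the dictionary answer format, and the function name are assumptions made for the example.

# Hypothetical "jackpot" scoring sketch: when the individual test comes first,
# a student who was right alone but changed to the group's wrong answer keeps credit.
def jackpot_score(individual_answers, group_answers, key, bonus=1):
    score = 0
    for item in key:
        correct_alone = individual_answers.get(item) == key[item]
        correct_in_group = group_answers.get(item) == key[item]
        if correct_in_group:
            score += 1        # normal credit for the group-phase answer
        elif correct_alone:
            score += bonus    # jackpot: right individually, wrong after discussion
    return score

# Example: item 2 was right individually but wrong after the group discussion.
key = {1: "c", 2: "d", 3: "a"}
print(jackpot_score({1: "c", 2: "d", 3: "b"}, {1: "c", 2: "b", 3: "a"}, key))  # -> 3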
To help with implementation of group testing in your context, we have found the following items helpful to consider. First, decide how the groups will be determined (by the students or by the instructor). To prepare the class for group testing, particularly in large classes, it is helpful to get students used to finding their assigned seats and working in groups on class activities. While in groups, have the students do one or more sample group quizzes as class assignments before the first test. When preparing the test, make sure the questions align with group activities performed earlier in class. It can be helpful to incorporate at least one question on the individual portion of the test that probes concepts similar to those on the group-test questions. During the test, keep track of time and make sure to stop the group-testing portion so that students have enough time to complete the individual portion of the test.

Our project may serve as an example of considering assessment during course design and aligning learning goals, instructional activities, and assessment tasks. Further, assessment can act as a useful driver of course reform. For example, Wiggins and McTighe (2006) explained that good design begins with choosing assessments that will display the enduring understanding students have developed. Once assessments are determined, instruction can be planned to build understandings toward that end goal.

The evidence provided here indicates that collaborative group testing is a useful assessment strategy for improving student skills such as learning to work collaboratively, stimulating student thinking and engagement, and decreasing test anxiety. These benefits can all be achieved with minimal classroom disruption if the strategy is properly planned and implemented throughout the semester. Students in our study, particularly the median- and low-performing students, scored better on the group portion of the test than on the conceptually related individual questions. Although Eaton (2009) found that group testing after individual testing on multiple-choice exams enhanced all students' performance on the group portion, even when students were divided into three performance groups (low, middle, high), our design focused on the same concepts on both the group and individual portions, with the group portion first. Our data suggest that if the topic is challenging, the group strategy will be significantly helpful for all levels of students, and if it is not, the lower performing students still benefit. Our findings to date, along with the findings of others who have used similar strategies, indicate the need for additional research to refine our understanding of collaborative group testing and how it can be used to increase student performance and, more important, enhance student learning.
Acknowledgments
We are grateful to student and faculty participants, as well as the members of the DIAL-B research group, particularly Carina Rebello. This material is based on work supported by the National Science Foundation under Grant No. 0837021. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

References
American Association for the Advancement of Science. (2011). Vision and change in undergraduate biology education: A call to action. Washington, DC: Author.
Cortright, R. N., Collins, H. L., Rodenbaugh, D. W., & DiCarlo, S. E. (2003). Student retention of course content is improved by collaborative group testing. Advances in Physiology Education, 27, 102–108.
Crowe, M., & Hill, C. (2006). Setting the stage for good group dynamics in semester-long projects in the sciences. Journal of College Science Teaching, 35(4), 32–35.
Eaton, T. T. (2009). Engaging students and evaluating learning progress using collaborative exams in introductory courses. Journal of Geoscience Education, 57(2), 113–120.
Giuliodori, M. J., Lujan, H. L., & DiCarlo, S. E. (2008). Collaborative group testing benefits high- and low-performing students. Advances in Physiology Education, 32, 274–278.
Johnson, D. W., Johnson, R. T., & Smith, K. A. (1998). Cooperative learning returns to college: What evidence is there that it works? Change, 30(4), 26–35.
Kapitanoff, S. H. (2009). Collaborative testing: Cognitive and interpersonal processes related to enhanced test performance. Active Learning in Higher Education, 10, 56–70.
Lusk, M., & Conklin, L. (2003). Collaborative testing to promote learning. Journal of Nursing Education, 42(3), 121–124.
National Research Council. (2001). Knowing what students know: The science and design of educational assessment (J. Pellegrino, N. Chudowsky, & R. Glaser, Eds.; Committee on the Foundations of Assessment). Washington, DC: National Academies Press.
National Research Council. (2012). A framework for K–12 science education: Practices, crosscutting concepts, and core ideas. Washington, DC: National Academies Press.
Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage.
Rebello, C. M., Siegel, M. A., Freyermuth, S. K., Witzig, S. B., & Izci, K. (2012). Development of embedded assessments for learning in biotechnology: Results and design process for dissemination. Biochemistry and Molecular Biology Education, 40(2), 82–88.
Roberts, T. M., Rebello, C. M., Witzig, S. B., Siegel, M. A., Freyermuth, S. K., & Izci, K. (2011). The effect of collaborative group testing on the performance and perceptions of students in a biotechnology course for non-majors. Proceedings of the Annual Meeting of the National Association for Research in Science Teaching, Orlando, FL.
Sandahl, S. S. (2010). Collaborative testing as a learning strategy in nursing education. Nursing Education Perspectives, 11, 143–147.
Smith, K. A. (2000). Going deeper: Formal small-group learning in large classes. New Directions for Teaching and Learning, 2000(81), 25–46.
Wiggins, G., & McTighe, J. (2006). Examining the teaching life. Educational Leadership, 63(6), 26–29.
Witzig, S. B., Freyermuth, S. K., Siegel, M. A., Izci, K., & Pires, J. C. (2013). Is DNA alive? A study of conceptual change through targeted instruction. Research in Science Education, 43, 1361–1375.
Witzig, S. B., Rebello, C. M., Siegel, M. A., Freyermuth, S. K., Izci, K., & McClure, B. A. (2014). Building the BIKE: Development and testing of the Biotechnology Instrument for Knowledge Elicitation (BIKE). Research in Science Education, 44, 675–698.
Marcelle A. Siegel (siegelm@missouri.edu) is an associate professor in the Departments of Learning, Teaching & Curriculum and Biochemistry; Tina M. Roberts is an instructor and NEP laboratory supervisor in the Department of Nutrition & Exercise Physiology; and Sharyn K. Freyermuth is an associate teaching professor in the Department of Biochemistry, all at the University of Missouri in Columbia. Stephen B. Witzig is an assistant professor in the Department of STEM Education & Teacher Development at the University of Massachusetts Dartmouth. Kemal Izci is an assistant professor in the Department of Educational Sciences, Eregli College of Education, Necmettin Erbakan University in Konya, Turkey.