Student Discussion or Expert Example? How to Enhance Peer Assessment Accuracy

  • Park, Jung Ae (Department of Psychology & Institute of Psychological Science, Seoul National University) ;
  • Park, Jooyong (Department of Psychology & Institute of Psychological Science, Seoul National University)
  • Received : 2019.10.11
  • Accepted : 2019.10.11
  • Published : 2019.12.30

Abstract

Writing is an activity known to enhance higher-level thinking: it allows the writer to use, apply, and actively extend acquired knowledge. One way to increase writing activity in a classroom setting is to use peer assessment. In this study, we sought to increase the accuracy of peer assessment either by having students discuss the scoring rubric or by having them refer to an expert's assessment. One hundred and fifty college students participated in the experiment. In the group that referred to the expert's assessment, the accuracy of peer assessment increased when the same piece of writing was re-evaluated, but no such increase was observed when a new piece of writing was assessed. In contrast, in the group that discussed the scoring rubric, accuracy remained unchanged when the same piece of writing was re-evaluated but increased when a new piece of writing was assessed. Moreover, in the discussion group, accuracy increased in proportion to the number of comments made during the discussion. These results suggest that active, voluntary participation by students increases the accuracy of peer assessment.
