
A Validation Study of the Retrospective Pre-post Test in the Affective Domain in Science Learning: For Scientifically Gifted Elementary Students

A Validation Study of the Integrated Pre-Post Test Design in the Affective Domain of Science Learning: For Scientifically Gifted Students

  • Received : 2017.08.02
  • Accepted : 2017.08.25
  • Published : 2017.08.31

Abstract

In this study, the reliability and validity of the retrospective pre-post test were analyzed in order to address the response-shift bias that affects the traditional pre-post test. The sample consisted of 162 elementary school students enrolled at the S University gifted education center in Seoul. Before the field trip, we administered a separate pre-test of science-related attitudes. After the field trip, respondents rated both their pre- and post-field-trip science-related attitudes, and the commonalities and differences between the two tests were analyzed quantitatively. To examine further characteristics, qualitative data such as daily records and interviews were also collected and analyzed. The major results of the study are as follows. First, a paired t-test showed no statistically significant difference between the separate pre-test scores and the retrospective pre-test scores, and the correlation between the two was very high. Second, there were significant differences in all seven sub-factors of science-related attitudes between the retrospective pre-test and the post-test. Third, the separate pre-test scores tended to be slightly higher than the retrospective pre-test scores, which suggests that response-shift bias appears when a separate pre-test is administered in the affective domain; the interviews indicated that the evaluation standards respondents applied on the separate pre-test did not match those applied on the post-test. Fourth, the internal consistency reliability of the retrospective pre-test was higher than that of the separate pre-test. However, between the separate pre-test and the post-test, significant differences were found in only six of the seven factors of science-related attitudes, with 'social implications of science' excluded. Based on these results, the retrospective pre-post test design offers simplicity and convenience to both respondents and investigators, since it is completed in a single administration, and it can be regarded as a valid design for the self-report measurement of the affective domain with a single experimental group.
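As a rough illustration of the quantitative analyses described in the abstract, the following is a minimal Python sketch run on simulated data, not the study's data: the sample size of 162 is taken from the abstract, but the score distributions, the seven-item sub-scale, and all variable names are hypothetical assumptions made only for demonstration.

```python
# Minimal sketch (not the authors' code) of the abstract's analyses on simulated data:
# paired t-tests and correlation between the separate and retrospective pre-tests,
# a paired t-test between the retrospective pre-test and the post-test,
# and Cronbach's alpha as an internal consistency estimate.
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
n = 162  # sample size reported in the abstract; all scores below are simulated
separate_pre = rng.normal(3.6, 0.5, n)                       # traditional pre-test
retrospective_pre = separate_pre - rng.normal(0.1, 0.2, n)   # "post-then-pre" rating
post = retrospective_pre + rng.normal(0.4, 0.3, n)           # post-test

# 1) Separate vs. retrospective pre-test: paired t-test and correlation
t1, p1 = stats.ttest_rel(separate_pre, retrospective_pre)
r, _ = stats.pearsonr(separate_pre, retrospective_pre)

# 2) Retrospective pre-test vs. post-test: paired t-test (program effect)
t2, p2 = stats.ttest_rel(retrospective_pre, post)

print(f"separate vs. retrospective pre: t={t1:.2f}, p={p1:.3f}, r={r:.2f}")
print(f"retrospective pre vs. post:     t={t2:.2f}, p={p2:.3f}")

# 3) Internal consistency of a hypothetical 7-item Likert sub-scale
items = rng.integers(1, 6, size=(n, 7)).astype(float)
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```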


