Analysing test-takers’ views on a computer-based speaking test

  1. Amengual-Pizarro, Marian
  2. García-Laborda, Jesús
Journal:
Profile: Issues in Teachers' Professional Development

ISSN: 1657-0790 (print), 2256-5760 (online)

Year of publication: 2017

Volume: 19

Issue: 1

Pages: 23-38

Type: Article

DOI: 10.15446/PROFILE.V19N_SUP1.68447

Abstract

This study examines test-takers' views on a computer-based speaking test in order to identify the aspects they consider most relevant in technology-mediated oral assessment, and to explore the main advantages and disadvantages of this type of test compared with speaking tests conducted by human examiners. A short questionnaire was administered to 80 candidates who took the APTIS speaking test at the Universidad de Alcalá in April 2016. The results reveal that test-takers regard computer-based speaking tests as valid and appropriate instruments for assessing oral competence. Interestingly, the data show that test-takers' personal characteristics play a major role in their choice of the most suitable assessment method.

References

  • Amengual-Pizarro, M. (2009). Does the English test in the Spanish university entrance examination influence the teaching of English? English Studies, 90(5), 585-598. https://doi.org/10.1080/00138380903181031.
  • Araújo, L. (Ed.). (2010). Computer-based assessment (CBA) of foreign language speaking skills. Luxembourg, LU: Publications Office of the European Union. Retrieved from https://www.testdaf.de/fileadmin/Redakteur/PDF/Forschung-Publikationen/Volume_European_Commission_2010.pdf.
  • Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice. Oxford, UK: Oxford University Press.
  • Bartram, D. (2006). The internationalization of testing and new models of test delivery on the internet. International Journal of Testing, 6(2), 121-131. https://doi.org/10.1207/s15327574ijt0602_2.
  • Bernstein, J., Van Moere, A., & Cheng, J. (2010). Validating automated speaking tests. Language Testing, 27(3), 355-377. https://doi.org/10.1177/0265532210364404.
  • Bulut, O., & Kan, A. (2012). Application of computerized adaptive testing to entrance examination for graduate students in Turkey. Eurasian Journal of Educational Research, 49, 61-80.
  • Chalhoub-Deville, M. (2003). Second language interaction: Current perspectives and future trends. Language Testing, 20, 369-383. https://doi.org/10.1191/0265532203lt264oa.
  • Chapelle, C. A. (2001). Computer applications in second language acquisition: Foundations for teaching, testing and research. Cambridge, UK: Cambridge University Press. https://doi.org/10.1017/CBO9781139524681.
  • Chapelle, C. A., & Douglas, D. (2006). Assessing language through computer technology. Cambridge, UK: Cambridge University Press. https://doi.org/10.1017/CBO9780511733116.
  • Chapelle, C. A., & Voss, E. (2016). 20 years of technology and language assessment in language learning and technology. Language Learning & Technology, 20(2), 116-128. Retrieved from http://llt.msu.edu/issues/june2016/chapellevoss.pdf.
  • Clariana, R., & Wallace, P. (2002). Paper-based versus computer-based assessment: Key factors associated with the test mode effect. British Journal of Educational Technology, 33(5), 593-602. https://doi.org/10.1111/1467-8535.00294.
  • Colwell, N. M. (2013). Test anxiety, computer-adaptive testing and the common core. Journal of Education and Training Studies, 1(2), 50-60. https://doi.org/10.11114/jets.v1i2.101.
  • Davidson, P., & Coombe, C. (2012). Computerized language assessment. In C. Coombe, P. Davidson, B. O’Sullivan, & S. Stoynoff (Eds.), The Cambridge guide to second language assessment (pp. 267-273). Cambridge, UK: Cambridge University Press.
  • Douglas, D., & Hegelheimer, V. (2007). Assessing language using computer technology. Annual Review of Applied Linguistics, 27, 115-132. https://doi.org/10.1017/S0267190508070062.
  • The European Higher Education Area. (1999). The Bologna declaration of 19 June 1999: Joint declaration of the European Ministers of Education. Retrieved from http://www.magna-charta.org/resources/files/BOLOGNA_DECLARATION.pdf.
  • Galaczi, E. D. (2010). Face-to-face and computer-based assessment of speaking: Challenges and opportunities. In L. Araújo (Ed.), Computer-based assessment of foreign language speaking skills (pp. 29-51). Luxembourg, LU: Publications Office of the European Union. Retrieved from https://www.testdaf.de/fileadmin/Redakteur/PDF/Forschung-Publikationen/Volume_European_Commission_2010.pdf.
  • García-Laborda, J. (2007). On the net: Introducing standardized EFL/ESL exams. Language Learning & Technology, 11(2), 3-9.
  • García-Laborda, J., Magal Royo, M. T., & Bakieva, M. (2016). Looking towards the future of language assessment: Usability of tablet PCs in language testing. Journal of Universal Computer Science, 22(1), 114-123.
  • García-Laborda, J., Magal Royo, T., Litzler, M. F., & Giménez López, J. L. (2014). Mobile phones for a university entrance examination language test in Spain. Journal of Educational Technology & Society, 17(2), 17-30.
  • García-Laborda, J., & Martín-Monje, E. (2013). Item and test construct definition for the new Spanish baccalaureate final evaluation: A proposal. International Journal of English Studies, 13(2), 69-88. https://doi.org/10.6018/ijes.13.2.185921.
  • Green, A. (2013). Washback in language assessment. International Journal of English Studies, 13(2), 39-51. https://doi.org/10.6018/ijes.13.2.185891.
  • Harb, J., Abu Bakar, N., & Krish, P. (2014). Gender differences in attitudes towards learning oral skills using technology. Education and Information Technologies, 19(4), 805-816. https://doi.org/10.1007/s10639-013-9253-0.
  • Jeong, H., Hashizume, H., Sugiura, M., Sassa, Y., Yokoyama, S., Shiozaki, S., & Kawashima, R. (2011). Testing second language oral proficiency in direct and semi-direct settings: A social cognitive neuroscience perspective. Language Learning, 61(3), 675-699. https://doi.org/10.1111/j.1467-9922.2011.00635.x.
  • Kang, O. (2008). Ratings of L2 oral performance in English: Relative impact of rater characteristics and acoustic measures of accentedness. Spaan Fellow Working Papers in Second or Foreign Language Assessment, 6, 181-205.
  • Kenyon, D. M., & Malabonga, V. (2001). Comparing examinee attitudes toward computer-assisted and other oral proficiency assessments. Language Learning & Technology, 5(2), 60-83.
  • Kenyon, D. M., & Malone, M. (2010). Investigating examinee autonomy in a computerized test of oral proficiency. In L. Araújo (Ed.), Computer-based assessment of foreign language speaking skills (pp. 1-27). Luxembourg, LU: Publications Office of the European Union. Retrieved from https://www.testdaf.de/fileadmin/Redakteur/PDF/Forschung-Publikationen/Volume_European_Commission_2010.pdf.
  • Kramsch, C. (1986). From language proficiency to interactional competence. The Modern Language Journal, 70(4), 366-372. https://doi.org/10.1111/j.1540-4781.1986.tb05291.x.
  • Lamy, M.-N. (2004). Oral conversations online: Redefining oral competence in synchronous environments. ReCALL, 16(2), 520-538. https://doi.org/10.1017/S095834400400182X.
  • Lee, A. C. K. (2003). Undergraduate students’ gender differences in IT skills and attitudes. Journal of Computer Assisted Learning, 19(4), 488-500. https://doi.org/10.1046/j.0266-4909.2003.00052.x.
  • Lee, J. A. (1986). The effects of past computer experience on computer aptitude test performance. Educational and Psychological Measurement, 46, 727-736. https://doi.org/10.1177/0013164486463030.
  • Lewis, S. (2011). Are communication strategies teachable? Encuentro, 20, 46-54.
  • Litzler, M. F., & García-Laborda, J. (2016). Students’ opinions about ubiquitous delivery of standardized English exams. Porta Linguarum, (Monográfico I), 99-110.
  • Lumley, T., & McNamara, T. F. (1995). Rater characteristics and rater bias: Implications for training. Language Testing, 12(1), 54-71. https://doi.org/10.1177/026553229501200104.
  • Luoma, S. (2004). Assessing speaking. Cambridge, UK: Cambridge University Press. https://doi.org/10.1017/CBO9780511733017.
  • Malabonga, V., Kenyon, D. M., & Carpenter, H. (2005). Self-assessment, preparation and response time on a computerized oral proficiency test. Language Testing, 22(1), 59-92. https://doi.org/10.1191/0265532205lt297oa.
  • May, L. (2009). Co-constructed interaction in a paired speaking test: The rater’s perspective. Language Testing, 26(3), 387-421. https://doi.org/10.1177/0265532209104668.
  • McNamara, T. F. (1997). ‘Interaction’ in second language performance assessment: Whose performance? Applied Linguistics, 18(4), 446-466. https://doi.org/10.1093/applin/18.4.446.
  • Nakatsuhara, F. (2010, April). Interactional competence measured in group oral tests: How do test-taker characteristics, task types and group sizes affect co-constructed discourse in groups? Paper presented at the Language Testing Research Colloquium, Cambridge, United Kingdom.
  • Nazara, S. (2011). Students’ perception on EFL speaking skill development. Journal of English Teaching, 1(1), 28-42.
  • Norris, J. M. (2001). Concerns with computerized adaptive oral proficiency assessment: A commentary on “Comparing examinee attitudes toward computer-assisted and other oral proficiency assessments” by Dorry Kenyon and Valerie Malabonga. Language Learning & Technology, 5(2), 99-105. Retrieved from http://llt.msu.edu/vol5num2/pdf/norris.pdf.
  • O’Sullivan, B. (2000). Exploring gender and oral proficiency interview performance. System, 28(3), 378-386. https://doi.org/10.1016/S0346-251X(00)00018-X.
  • O’Sullivan, B. (2012). Aptis test development approach (Aptis Technical Report, ATR-1). London, UK: British Council. Retrieved from https://www.britishcouncil.org/sites/default/files/aptis-test-dev-approach-report.pdf.
  • O’Sullivan, B., & Weir, C. (2011). Language testing and validation. In B. O’Sullivan (Ed.), Language testing: Theories and practices (pp. 13-32). Oxford, UK: Palgrave.
  • Ockey, G. J. (2009). The effects of group members’ personalities on a test taker’s L2 group oral discussion test scores. Language Testing, 26(2), 161-186. https://doi.org/10.1177/0265532208101005.
  • Pearson. (2009a). Official guide to Pearson test of English academic. London, UK: Author.
  • Pearson. (2009b). Versant Spanish test: Test description and validation summary. Palo Alto, US: Author.
  • Qian, D. D. (2009). Comparing direct and semi-direct modes for speaking assessment: Affective effects on test takers. Language Assessment Quarterly, 6(2), 113-125. https://doi.org/10.1080/15434300902800059.
  • Roca-Varela, M. L., & Palacios, I. M. (2013). How are spoken skills assessed in proficiency tests of general English as a foreign language? A preliminary survey. International Journal of English Studies, 13(2), 53-68. https://doi.org/10.6018/ijes.13.2.185901.
  • Saadé, R. G., & Kira, D. (2007). Mediating the impact of technology usage on perceived ease of use by anxiety. Computers & Education, 49(4), 1189-1204. https://doi.org/10.1016/j.compedu.2006.01.009.
  • Shohamy, E. (1994). The validity of direct versus semi-direct oral tests. Language Testing, 11(2), 99-123. https://doi.org/10.1177/026553229401100202.
  • Stemler, S. E. (2004). A comparison of consensus, consistency, and measurement approaches to estimating inter-rater reliability. Practical Assessment, Research & Evaluation, 9(4). Retrieved from http://pareonline.net/getvn.asp?v=9&n=4.
  • Taylor, C., Kirsch, I., Jamieson, J., & Eignor, D. (1999). Examining the relationship between computer familiarity and performance on computer-based language tasks. Language Learning, 49(2), 219-274. https://doi.org/10.1111/0023-8333.00088.
  • Underhill, N. (1987). Testing spoken language: A handbook of oral testing techniques. Cambridge, UK: Cambridge University Press.
  • Xi, X. (2010). Automated scoring and feedback systems: Where are we and where are we heading? Language Testing, 27(3), 291-300. https://doi.org/10.1177/0265532210364643.
  • Zechner, K., & Xi, X. (2008, June). Towards automatic scoring of a test of spoken language with heterogeneous task types. In Proceedings of the third ACL workshop on innovative use of NLP for building educational applications (pp. 98-106). Stroudsburg, US: Association for Computational Linguistics. https://doi.org/10.3115/1631836.1631848.
  • Zechner, K., Higgins, D., Xi, X., & Williamson, D. M. (2009). Automatic scoring of non-native spontaneous speech in tests of spoken English. Speech Communication, 51(10), 883-895. https://doi.org/10.1016/j.specom.2009.04.009.
  • Zhan, Y., & Wan, Z. H. (2016). Test takers’ beliefs and experiences of a high-stakes computer-based English listening and speaking test. RELC Journal, 47(3), 363-376. https://doi.org/10.1177/0033688216631174.
  • Zhou, Y. (2015). Computer-delivered or face-to-face: Effects of delivery mode on the testing of second language speaking. Language Testing in Asia, 5(2). https://doi.org/10.1186/s40468-014-0012-y.